8.07.2011

Moving to opentheory.net

After a long run at Blogger I'll be moving to Opentheory.net. The same sort of content, just a cleaner site. I've moved over all the old content, and will transition RSS in a few days.

I'll be leaving this blog as-is, but only posting at the new site. Please update your bookmarks!

7.31.2011

The Invisible Backhand: How Anonymous Has Already Won

The hacker group Anonymous has been on a tear lately, successfully hacking the Tunisian government, Sony, and federal cybersecurity contractors; after suffering several raids, it's now even eyeing the FBI.

It's an interesting era for extreme cyber activism, with the hacker community seemingly finding its voice and becoming very creative in exacting vengeance upon organizations it sees as oppressive. Much has been said about whether this is ethical, whether Anonymous can maintain its effectiveness, and how things will develop from here. But I think most commentators have missed the point:

Anonymous has already won. And it boils down to one word: insurance.

It looks probable that cybersecurity insurance will become required for many sorts of companies-- the proverbial cat is out of the bag, and even if Anonymous isn't behind the keyboard, so-called "ethical hacking" is likely to increase in popularity. Given this, it'll become as common to hedge your risk from hacking as it is to hedge your risk from fire or flooding. But insurance companies aren't dumb, and it's likely that the premium on cybersecurity insurance will strongly reflect how much of a high-profile hacker target a company is. Just as it's more expensive to insure a mud-foundation coastal house against hurricanes, so too it'll be more expensive to insure a company popularly seen as brazenly greedy against hackers. Companies will have a powerful and quantifiable incentive not to engage in activities that make them a target.
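
To make the incentive concrete, here's a toy actuarial sketch in Python. Every number and the 'target profile' term are invented for illustration-- real underwriting models are far richer-- but the core logic is just a premium tracking expected loss:

```python
# Toy model: cybersecurity insurance premium as expected loss plus loading.
# All numbers are made up for illustration.

def annual_premium(breach_cost, base_breach_prob, target_profile, loading=1.3):
    """target_profile: 1.0 = typical company; higher = more attractive hacker target."""
    expected_loss = breach_cost * base_breach_prob * target_profile
    return expected_loss * loading  # insurer adds overhead and profit margin

# A company popularly seen as brazenly greedy (profile 4.0) vs. a quiet one (1.0):
print(annual_premium(breach_cost=5_000_000, base_breach_prob=0.02, target_profile=1.0))  # 130000.0
print(annual_premium(breach_cost=5_000_000, base_breach_prob=0.02, target_profile=4.0))  # 520000.0
```

The numbers don't matter; the point is that once a target-profile term exists in the premium formula, behavior that inflates it carries a recurring, quantifiable cost.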

To put this a different way, sometimes companies do things that are legal but unethical. Vigilante justice can 'reinternalize' the externalized costs of these behaviors.

Granted, I'm not saying illegally hacking companies is a good thing, just that Anonymous has the potential to be a very potent market force. They could still snatch defeat from the jaws of victory by being capricious with their targets: if there's little correlation between deed and penalty, insurance premiums will be high across the board. It'll be interesting to see how things turn out.

7.30.2011

Quote of the week: on distractions

From the New York Times piece "Why Writers Belong Behind Bars":
It’s wonderful that writers can access medieval manuscripts, Swahili dictionaries and collections of 19th-century daguerreotypes at any moment. But the downside is that it’s almost impossible to finish a sentence without interruption. I confess that even those last 15 words were stalled by a detour, via Wikipedia, to various health Web sites, where I learned that concern was aroused last year by a report that Wi-Fi radiation was causing trees to shed their bark in a Dutch town, and that our excessive Web browsing and e-mailing may also be having ill effects on bees and British children. After an hour of this, I concluded that perhaps an equally urgent scientific study might be conducted on the devastation Wi-Fi has caused to world literature. The damage is surely incalculable.

5.16.2011

Pain/pleasure metaphysics-- a request

Lately I've been looking into causal connections between brain states and pain/pleasure.

I'm finding plenty of material on specifics such as nociceptors, gate circuits, correlative fMRI studies, and so forth, but there doesn't appear to be a lot of research, or even much speculation, on the general question.

What are pain and pleasure, in relation to systemic properties of the brain? E.g., what principles could be used to examine a brain and predict whether it's experiencing pain or pleasure? If we knew someone was experiencing pain or pleasure, what principles could we apply to predict what's going on in their brain?

Ditto for sadness and happiness.

If any readers have perspective on the literature or can put me in touch with someone who does, please let me know.

11.19.2010

Quote: the most important idea in neuroscience?

Mind training is based on the idea that two opposite mental factors cannot happen at the same time. You could go from love to hate, but you cannot at the same time, toward the same object, the same person, want to harm and want to do good. You cannot in the same gesture shake a hand and give a blow. So there are natural antidotes to emotions that are destructive to our inner well-being.

Humans are very bad at multitasking. This can have a silver lining.

11.12.2010

Cognitive enhancement and a new social contract

Many serious people are projecting that within ten to fifteen years we'll be able to start on a significant program of cognitive enhancement: drugs, hormone cocktails, neurointerfaces, and neuroprostheses that will make their users significantly smarter and more capable-- initially to a degree perhaps comparable to the invention of literacy or science, but soon far outstripping any previous transition in the history of the human mind.

If we grant that this is possible, the only real debate is when. 10 years? 15? 50? 100? The gears of capitalism and human nature ensure that it'll come, sooner or later. And I think the only way this won't end in certain disaster is to develop, formalize, and enforce a new social contract regarding human enhancement.

My suggestion? If you want to use biotechnology to make yourself smarter, you also have to use it to make yourself nicer.


If we don't make this the accepted contract, I fear we'll ping-pong between two unpalatable scenarios: either open things up to an enhancement free-for-all (and there's likely a strong correlation between people who most want to be cognitively enhanced and people for whom it's not in society's best interests to grant a competitive advantage), or criminalize enhancement (and if we outlaw enhancement, only outlaws will be enhanced).

8.16.2010

Learning and Memory: location, location, location!

(Skip to near the end for my personal hypothesis about learning and memory.)

Part 4: Location, location, location!

Not long ago, we knew very little about learning and memory. Sure, we understood the basic concepts-- people could learn things, and then recall them later-- but they were black box processes, obfuscated by the complexity of the brain. Everyone knew what they were, but nobody could explain how they worked. We just sort of threw information at people and hoped it stuck.

Fast forward to 2010, and though we're getting better at the practicalities of learning and we can roughly describe the contours of the process, we still don’t really understand the internal mechanics of learning and memory to any fundamentally satisfying degree. These internal mechanics are still mysteries, of which we have no large-scale, generally predictive models. Ask a neuroscientist to explain exactly how you remember where you parked your car, and you’ll get a convoluted answer involving mnemonic association, episodic memory, grid cells, the cerebral cortex, and a few other things in our current neuro-ontology. The description you get may be true as far as it goes, but certainly not satisfying, as our descriptions mostly involve hand-waving at major concepts we think are important, rather than telling a specifically predictive story.

But we’re getting closer. We do know a lot more than we did.

What is Learning?

We've made a great deal of progress dissecting the fundamental nature of learning. The organizing principle of our current understanding of learning is, with a nod to Hebb, "neurons that fire together, wire together." That is, the body's neural wiring algorithms assume that neurons and neural networks which are often activated at the same time are likely related, and strengthen the connections between them. This simple strategy, applied with great nuance within the diverse and hierarchical structures of the brain, helps the brain find, internalize, and leverage patterns, and drives the translation of conscious processes into subconscious aptitudes and habits of body and mind.
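
A minimal sketch of this wiring rule, using the simplest textbook formalization (a plain Hebbian outer-product update-- real synaptic plasticity is vastly more nuanced, so treat this as illustration rather than mechanism):

```python
import numpy as np

def hebbian_update(weights, activity, learning_rate=0.01):
    """Strengthen connections between units that are active at the same time.

    weights: (n, n) connection matrix; activity: (n,) firing levels for one event.
    The weight between units i and j grows in proportion to how strongly both
    fired together -- "fire together, wire together."
    """
    return weights + learning_rate * np.outer(activity, activity)

n = 4
w = np.zeros((n, n))
# Units 0 and 1 repeatedly co-activate; unit 3 always fires alone.
for _ in range(100):
    w = hebbian_update(w, np.array([1.0, 1.0, 0.0, 0.0]))
    w = hebbian_update(w, np.array([0.0, 0.0, 0.0, 1.0]))

print(w.round(2))  # a strong 0<->1 coupling emerges; unit 3 only self-reinforces
```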

The Structure of Memory

A great deal of effort has been put into exploring the structures of memory. The result has been a set of fairly workable models, which have had some practical success at bringing various pathologies under the umbrella of theory and which conform well with folk psychology and the results of many thousands of memory experiments. They don’t handle all the edge cases well, and in many contexts they’re more descriptive than predictive, but they’re pretty good, as far as they go.

Perhaps the crown jewel of the consensus model is that our memory is more-or-less divided into short-term memory and long-term memory. Short-term memory is essentially our capacity to store static information in our brain without engaging the machinery of medium- and long-term memory. Many experiments peg the capacity as limited to 7 ± 2 items; this can vary depending on the items' complexity, similarity, mnemonic strategies in use, familiarity, and the person in question[1,2], but at any rate, it’s a very finite quantity. Most of the stuff that passes through short-term memory is ultimately lost (or severely compressed): people keep stuff there until they don’t need it, then it’s gone.

Long-term memory, on the other hand, is where stuff in your short-term memory goes if your brain’s heuristics decide it’s worth keeping. To drastically simplify things: if your brain decides something’s important, it sends it to the hippocampus; during sleep, your brain processes, consolidates, and compresses what’s in the hippocampus and sends it off to relevant parts of the brain for long-term storage. Much of it ends up in the cerebral cortex, where it’s more-or-less organized into ordered and linked lists. We know these lists have a directional preference, which is why it’s so difficult to say the alphabet backward (your brain needs to make a new list for it).
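
The linked-list picture is a loose analogy rather than a literal claim about cortical storage, but it makes the alphabet-backward observation concrete. A small sketch: with only forward links, reverse recall requires constructing a second list.

```python
# A singly-linked ("directional") list: each item points only to its successor,
# like recalling the alphabet. Forward traversal is trivial; backward traversal
# requires building a second structure -- just as reciting Z-to-A feels like
# learning a new list rather than reading the old one in reverse.

forward = {a: b for a, b in zip("ABCDEFG", "BCDEFG")}  # A->B, B->C, ...

def recall_forward(start):
    out = [start]
    while out[-1] in forward:
        out.append(forward[out[-1]])
    return out

def recall_backward(end):
    # No backward pointers exist; we must first construct a reversed list.
    backward = {b: a for a, b in forward.items()}
    out = [end]
    while out[-1] in backward:
        out.append(backward[out[-1]])
    return out

print(recall_forward("A"))   # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
print(recall_backward("G"))  # ['G', 'F', 'E', 'D', 'C', 'B', 'A']
```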

The Ephemeral Quality of Short-term Memory

A key finding in recent research is the plastic nature of short-term memory: there's a window of time, before memories are translated into long-term storage, during which they behave like putty and accessing a memory will change it. During this window, when we recall something it can easily be reinforced, altered, or destroyed... depending primarily on what else is going on inside and around us, and on whether we're interrupted while recalling it. (The brain doesn't always have a reliable autosave function.)

This seems odd-- we think of memories as timeless records which may fade with age but are inherently stable. But the science does not back this up-- and given that we don't have memories of our memories, who are we to gainsay it? It appears it can take roughly one to three nights of sleep to consolidate a memory into long-term storage, hardening the proverbial putty of short-term memory into a more lasting form.

... and of Long-term memory

After memories are consolidated into long-term memory, they're not ephemeral-- but neither are they permanent, or even particularly stable. It appears that memories are similar to library books: you can 'check them out' from the recesses of your brain and use them, but if you alter the memory when it's in short-term storage, those changes get 'checked back in' and change the original. If you think about a given memory often, you are changing it, for better or worse.

The Limits of our Memory Models

The consensus model has less to say about how the brain classifies and integrates different types of information into memory. We’ve established that different brain regions are strongly associated with certain functions, and it’s certain there’s some sort of elegant sorting mechanism the brain uses to direct information to appropriate regions. But we don’t really have ontologically firm concepts with which to speak about how the brain does sorting or (to some extent) classification. That said, a key result in recent years has been the identification of function-specific brain structures, such as grid and place cells. These functional structures are where a lot of the hottest research is happening, since their bounded contexts are accessible to experimentation and reductionism. Clearly, location and episodic memory must use grid cells in some fashion; clearly, mirror neurons must be deeply relevant to muscle memory and social learning. We just don’t know exactly how yet.

Unfortunately, if we push them hard in any specific direction our models of memory start to look like cardboard cutouts (much like memories themselves- but we digress). They’re wonderful guides to what’s roughly going on, but they don’t have a great deal of depth or precision. If we apply Karl Popper’s evaluative lens that ‘inherent in any good explanation is a prediction, and inherent in any good prediction is an explanation’, we find our models of memory rather constrained: they’re not particularly specifically predictive over most of human experience.

They’re also much more detailed in some areas than in others. We know the limit of short-term memory, for instance, with much more clarity than we know the details of how the hippocampus works or even how information gets recalled once stored.

The Future of Memory Research: Models and Measurement

The limits of any science emerge from what it can and can’t measure, and neuroscience is no exception. We have lots of phenomenological information, which is helpful, but the things we’re having trouble measuring include:

1. being able to tag and track information as it travels through the brain;
2. better quantifying how the brain splits information into chunks and ties them together;
3. measuring how different parts of the brain change information that passes through them (and likewise, how information changes the parts of the brain it passes through);
4. extracting deep functional data from activity scans;
5. designing roughly predictive digital models of brain subsections (other than certain exceptions like the cerebellum).

Progress in any of these areas would drive progress in the others. The productive frontiers in this seem to include:

- improving and melding many sorts of brain scans together. The gold standard today for functional research is fMRI; the next gold standard will be a composite of, e.g., high-tesla fMRI, PET scans for gene expression data, EEG and MEG for better temporal resolution, etc.
- better identifying the computational principles which fit the contours of various brain activities (as we’ve done somewhat with memory structure in the cerebral cortex);
- better reverse-engineering the algorithmic approaches taken by brain circuitry (as we’ve done with the visual and auditory cortexes);
- charting out the ‘circuit diagram’ of brain subsections (as we’ve done with the cerebellum);
- simulating the brain.

Reverse-engineering and simulating the brain is a huge topic, one which I’ll cover in another post. Basically though, once we have high-quality neural simulations which allow us to tag and track information as it travels through a virtual brain we may be able to move from a fragmented understanding of memory to something more emergent, experimental, and predictive.

---------------------------------------------------

A Modest Proposal

So that's the current story on learning.

What I want to talk about specifically is something that's not in the current story. An implicit assumption running through this current consensus is that the physical location where people's brains happen to store information doesn't matter-- in other words, 1. there is very little variability in *where* similar sets of information get encoded in people's brains, and/or 2. when differences occur they have only trivial functional implications.

I think these assumptions will be shown to be significantly false, and if we look underneath them there's a whole new realm of study waiting to be unlocked. In a nutshell, I’m arguing three things:

1. The regional localization of learned information can vary;
2. Regional localization of learned information commonly varies between individuals and learning approaches;
3. Differences in regional localization of learned information have practical significance in cognition and behavior.

I can solidly support (1): aside from the obvious example of right-brain-vs-left-brain lateralization of function, there are many examples of hemispherectomy-- the physical removal of half of the brain-- where patients fully recovered and exhibited no mental deficits.[3] The two significant variables seemed to be age and the speed of degeneration: young people did much better than old, and people who had a slow degenerative disease and thus gave their brains time to migrate information and function away from the diseased hemisphere did much better than those with quicker illnesses.

(2) is more arguable. We simply don’t have good ways to measure where people localize learned information. A lot of people who study the brain take localization invariance for granted-- but once we get the technology, experiments on, e.g., tracking information storage and retrieval in musicians and non-musicians as each is taught the same song could be interesting. Differences in localization might arise from differences in aptitudes, genetics, some environmental cues, or just randomness.

(3) is still somewhat ambiguous, but I can appeal to the considerable functional significance of the brain’s computational topology, and some work on right vs left hemisphere specializations.[4]

The goal of this suggestion is to help us better quantify different ways of knowing, and to ground this in a functionally-predictive context.

What could cause information to be encoded in one region and not another? How could this guide our behavior and/or treatments? It’s hard to say (yet).


Further Musings:
- A closely related issue is topological constraints on information linkages, where the brain is physically limited from connecting any arbitrary node of information to any other node. Consider, e.g., two nodes in the same region but 2 cm apart, where the regional neuronal configuration hinders attempts at making a strong connection. How functionally significant are these sorts of topological limitations? Are they responsible for mental blocks at the level of our experience, like not being able to connect two concepts together very well? Do such intra-regional topologies vary in interestingly distinct ways across individuals?
- I have phrased this in terms of differences in "regional localizations". We can perhaps break this down into
- which brain region information gets encoded into;
- which part of each brain region information gets encoded into;
- what the intra-region encoding patterns are.
I don't think we know enough to estimate the relative contributions of each. But they all point toward the central concept I'm trying to convey, that the topology of information localization differs significantly between people and that this has functional implications.

Edit, 10-3-10: Research into the learning process is really moving quite fast. Recommended links:
- Easier Way To Do Perceptual Learning: "20 minutes of training followed by 20 minutes of listening to a musical tone was just as effective as 40 minutes of training."
- Forget What You Know About Good Study Habits: "psychologists have discovered that some of the most hallowed advice on study habits is flat wrong."

Again, we're only able to see the outward-facing phenomena of learning and memory, not the internal mechanisms. But even this stuff is really interesting.

Edit, 10-28-10: Esquire has a particularly readable piece about how modern neuroscience research got its start. The point it makes is that, historically, we've been able to decipher basic brain region function by looking at what happens when that region gets damaged, through injury or surgery.

In 1848, an explosion drives a steel tamping bar through the skull of a twenty-five-year-old railroad foreman named Phineas Gage, obliterating a portion of his frontal lobes. He recovers, and seems to possess all his earlier faculties, with one exception: The formerly mild-mannered Gage is now something of a hellion, an impulsive shit-starter. Ipso facto, the frontal lobes must play some function in regulating and restraining our more animalistic instincts.

In 1861, a French neurosurgeon named Pierre-Paul Broca announces that he has found the root of speech articulation in the brain. He bases his discovery on a patient of his, a man with damage to the left hemisphere of his inferior frontal lobe. The man comes to be known as "Monsieur Tan," because, though he can understand what people say, "tan" is the only syllable he is capable of pronouncing.

Thirteen years later, Carl Wernicke, a German neurologist, describes a patient with damage to his posterior left temporal lobe, a man who speaks fluently but completely nonsensically, unable to form a logical sentence or understand the sentences of others. If "Broca's area," as the damaged part of Monsieur Tan's brain came to be known, was responsible for speech articulation, then "Wernicke's area" must be responsible for language comprehension.

And so it goes. The broken illuminate the unbroken.


Edit, 5-25-11: There's been some interesting research on using brain stimulation to aid learning: essentially using tiny amounts of electricity to induce changes in rats' brains that make them better learners. After the current is shut off, the rats' brains go back to normal but they keep their learned skills. We don't know what the specific trade-offs may be, but between this approach and approaches which could mimic developmental neuroplasticity triggers, we may have the basis for a very desirable form of cognitive enhancement.

Here's "Scienceblog" on the a theory on how the brain picks which of its neural networks to use for a new skill:

The study by Reed and colleagues supports a theory that large-scale brain changes are not directly responsible for learning, but accelerate learning by creating an expanded pool of neurons from which the brain can select the most efficient, small “network” to accomplish the new skill.

This new view of the brain can be compared to an economy or an ecosystem, rather than a computer, Reed said. Computer networks are designed by engineers and operate using a finite set of rules and solutions to solve problems. The brain, like other natural systems, works by trial and error.

The first step of learning is to create a large set of diverse neurons that are activated by doing the new skill. The second step is to identify a small subset of neurons that can accomplish the necessary computation and return the rest of the neurons to their previous state, so they can be used to learn the next new skill.

By the end of a long period of training, skilled performance is accomplished by small numbers of specialized neurons not by large-scale reorganization of the brain. This research helps explain how brains can learn new skills without interfering with earlier learning.

Edit, 7-28-11: Scientists have traced the recall of a specific memory and found it partially activates other memories from around the same time. Unsurprising, given it's common to experience memories as strongly linked, but still good science, and perhaps it supports the viewpoint that all memory is ultimately episodic in some real sense.

Researchers have long known that the brain links all kinds of new facts, related or not, when they are learned about the same time. Just as the taste of a cookie and tea can start a cascade of childhood memories, as in Proust, so a recalled bit of history homework can bring to mind a math problem — or a new dessert — from that same night.

For the first time, scientists have recorded traces in the brain of that kind of contextual memory, the ever-shifting kaleidoscope of thoughts and emotions that surrounds every piece of newly learned information. The recordings, taken from the brains of people awaiting surgery for epilepsy, suggest that new memories of even abstract facts — an Italian verb, for example — are encoded in a brain-cell firing sequence that also contains information about what else was happening during and just before the memory was formed, whether a tropical daydream or frustration with the Mets.

The new study suggests that memory is like a streaming video that is bookmarked, both consciously and subconsciously, by facts, scenes, characters and thoughts.

...

“When you activate one memory, you are reactivating a little bit of what was happening around the time the memory was formed,” Dr. Kahana said[.]

8.15.2010

SS2010 Highlights: Day 1

Day 1: The Future of Human Evolution

Michael Vassar: The Darwinian Method
A solid talk about the scientific method and rationality. People can be rational without being scientific; good organizational structures can protect against bias (but may be being eroded by the internet conjoining universities); there are different types of scientific method ("Scholarly science" (scholarly consensus) vs "Enlightenment science" (testing)); the Scientific Method is amazing because non-geniuses can still contribute to scientific progress. No big surprises, but a good kickoff. Vassar seems pretty familiar with philosophy.

Gregory Stock: Evolution of Post-Human Intelligence
A light talk about the future, progress, and evolution. Interesting points were
1. when Stock veered off and talked about his company, Signum Biosciences. They have an Alzheimer's drug, based on compounds found in coffee, just entering human trials.
2. Stock posed the question, "Why would love or human values survive?" -- presumably in AIs or post-human intelligences these things would be competitive handicaps and the ones burdened by human values would die out. I think it's a good point. Perhaps the point could be extended to any intelligence driven by emotion. Or perhaps even consciousness itself, given that something could somehow be intelligent yet nonconscious. Is the future owned by anhedonic, zombie AI?

Ray Kurzweil: The Mind and How to Build One
He teleconferenced in from vacation (boo). Nothing really new, but he has clearly spent a lot of time thinking about reverse-engineering the principles the brain uses. According to Kurzweil, the spatial resolution of our brain imaging doubles every year. He talked some about simulation progress projections (Markram of the Blue Brain Project says 2018; Kurzweil says the late 2020s to fully simulate a human brain). Interesting points included that we've basically completely reverse-engineered the wiring of the cerebellum (essentially the same neuron structure is repeated 10 billion times); we're working on the cerebral cortex, and though it's a lot more complex, we're learning about its data structures (it functions much like LISP's linked lists). Likewise, we've deciphered that vision is essentially an amalgamation of 7 different low-resolution information streams. Progress.

A big problem in brain simulation, which Kurzweil mentioned, and Goertzel brought up when I spoke with him, is training brain simulations. Training will not only be difficult, but simulations will need to be trained before we can evaluate how good they are-- and even if we can raise them at 20x speed, it'll still take a year before we know enough to tell much.

Ben Goertzel: AI Against Aging
Goertzel's dream is a computer that can do biology better than people can. We're a long way off. He's using 'narrow' AI programs in order to narrow down promising drug targets from thousands to dozens, specifically in the context of longevity compounds. Smart datamining.

His view on the etiology of aging (contra de Grey):
Cross-species data analysis strongly suggests that most age-associated disease and death is due to "antagonistic pleiotropy" -- destructive interference between adaptations specialized for different age ranges. The result is that death rate increases through old age, and then stabilizes at a high constant rate in late life.

Steven Mann: Humanistic Intelligence Augmentation and Mediation
Mann is known as the "first cyborg". Very into wearable computing. Wears a camera on his head and records everything. VR overlay capacity (he calls it "mediated reality"). Interesting in the context of technologies like Layar for the iPhone. He also designed a water-based instrument (to fill in the orchestral gap between "solid" instruments like drums and strings, and "air" instruments like woodwinds and brass).

Mandayam Srinivasan: Enhancing our bodies and evolving our brains
The father of haptic (touch feedback) technology. Talked about different 'levels' of haptic technologies-- everything from using haptic tech to interact with digital objects, to perhaps brain-computer interfaces where our brains grow our sense of self to encompass an artificial prosthesis (a third arm, say). Bottom line: the brain and our sense of self are very plastic, particularly given a feedback mechanism.

Brian Litt: The past, present and future of brain machine interfaces
Probably my favorite talk. Very grounded and accessible, but with speculative undertones. Talked about the neuroscience and engineering difficulties of BCIs. I'm posting some excerpts, because his talk was very content-rich:

different types of BCIs-
- one way vs two way (open or closed loop)
- invasiveness (non, partial, very) (influences bandwidth)
- spatial scale (topology, degrees of freedom)
- temporal scale (precision)

levels of organization- where to interact with the brain?
- neuron
- cortical column
- nuclei
- functional networks
- cortical regions

afferent BCIs (inject a signal)
- map the network
- choose 'connection' site
- inject a signal (MUST contain information)
- "neuroplasticity" helps interprets over time
- performance = f (information quality, accessibility, bandwidth…)

efferent BCIs (find signal, take it out)
- map the network
- find a recording site
- transduce a signal
- algorithms 'interpret'
- 'neuroplasticity' (but you get less help from the brain going out than going in)
- performance=f (resolution, signal quality, algorithms, information)

major challenges in BCIs:
data dimensionality
data rates- up to 25 bits/min in 2000 (almost double now)
biocompatibility
tissue/electrode interface
mapping circuits for meaningful injection/extraction points

state of the art for electrodes is bad…
12 million neurons get represented by 1 electrode. Likewise, electrodes don't measure the same neurons during different experiments.

Litt also talked about the technology behind cochlear implants and a bit about vision implants. The state of the art in cochlear implants is 22 one-dimensional channels, and a lot of useful information can be packed into this data stream if some audio filters and harmonic extractions are performed on the original sound.

I was curious how plastic Litt thought brain structure was-- e.g., if you hooked up a cochlear implant system to the visual nerves, would you get sonar? He seemed sympathetic to this idea in correspondence. More speculatively, I found myself wondering whether there's any reason to believe we could coax the brain into productively utilizing, and relocating function to, a digital "third hemisphere" type prosthesis.

Demis Hassabis: Combining systems neuroscience and machine learning: a new approach to AGI
Hassabis essentially made the argument that there are three main ways to approach modeling intelligence, and that only two of these niches are being filled. He calls his third approach "systems neuroscience".

Marr, whom Hassabis refers to as the "father of computational neuroscience," identifies three levels of analyzing complex biological systems:
- computational - defining goals of the system (e.g., Opencog)
- algorithmic - how the brain does things - the representations and algorithms (Hassabis's own niche)
- implementation - the medium - the physical realization of the system (e.g., Blue Brain, SyNAPSE)

So there are productive opportunities for people to try to reverse-engineer then formalize and/or reuse the brain's algorithms.

Hassabis also broke knowledge up into three categories that may(?) roughly correspond to these three levels: perceptual, conceptual, and symbolic. Analysis of perceptual knowledge is serviced by tools such as DBN, HMAX, HTM. Symbolic knowledge is serviced by logic networks. Conceptual knowledge is, according to H, not very well serviced.

It was an interesting talk, and I may need to watch it a second time to organize it into a coherent narrative. It felt content-rich and smart but somewhat conceptually disjoint.

Terry Sejnowski: Reverse-engineering brains is within reach
A smart but content-lite talk about extracting computational principles from the brain. More of a set-up to the debate than anything. Noted that our models of the brain come in many levels of abstraction, and progress in reverse-engineering brains will involve connecting these different maps (CNS, Systems, Maps, Networks, Neurons, Synapses, Molecules).

Dennis Bray: What Cells Can Do That Robots Can't
A smart but impenetrably dense survey of some complexities of cellular operation. Bray was brought in as a skeptical voice of the Old Guard Biological Establishment.

"As a card-carrying cell biologist, my loyalty lies with the carbon-based systems."

His focus seemed to be that cells are incredibly complex (there are 10^12 protein molecules in each of our cells) and impressively adaptable. It'd be very difficult to model the complexity or replace the adaptability, and the two are tightly linked.

Sejnowski/Bray debate: Will we soon realistically emulate biological systems?
An extremely polite and tame cagematch between Sejnowski and Bray. They seemed to converge on the idea that in principle, we could emulate biological systems-- but our current models are Very Far from being realistic simulations. Sejnowski exhibited some tools (MCell, a Monte Carlo simulator of microphysiology) which apparently do a good job at modeling certain aspects of cell biology. Bray held out for full simulation, relating that Francis Crick once told him, "explanations at the molecular level have a unique power, because they can be verified in so many ways."

One important point that emerged was that cells are not stateless-- certain kinds of memory are embedded epigenetically, and such epigenetic memory has been shown to help restore memories in a damaged brain. In general, cells carry a significant amount of memory/learning that helps them predict future conditions and primes them for future behavior… to get realistic neural behavior, presumably much of this memory will need to be modeled.

The short version: nobody knows how simple a model of a neuron we can get away with to realistically emulate a brain. However, it's safe to say we're not there yet.


All in all, a very interesting day. Not as many superstars as last year, and not a ton of diversity in topic (no 3d printing, no economics, etc), but a lot of good things were thought and said.

8.13.2010

SS2010

I'll be at the Singularity Summit this weekend in San Francisco. Look for M. Edward Johnson. I'll also have rocking sideburns.

11.15.2009

Toward a new ontology of brain dynamics: neural resonance + neuroacoustics

Part 3 of my series. I think this is an important idea.

Part 1: Neurobiology, psychology, and the missing link(s)
Part 2: Gene Expression as a comprehensive diagnostic platform
Part 3: Neural resonance + neuroacoustics
Part 4: Location, location, location!

The brain is extraordinarily complex. We are in desperate need of models that decode this complexity and allow us to speak about the brain's fundamental dynamics simply, comprehensively, and predictively. I believe I have one, and it revolves around resonance.

Neural resonance is currently an underdefined curiosity at the fringes of respectable neuroscience research. I believe that over the next 10 years it'll grow into a central part of the vocabulary of functional neuroscience. I could be wrong-- but here's the what and why.

Resonance, in a nutshell

To back up a bit and situate the concept of resonance, consider how we create music. Every one of our non-electronic musical instruments operates via resonance-- e.g., by changing fingering on a trumpet or flute, or moving a trombone slide to a different position, we change which frequencies resonate within the instrument. When we blow into the mouthpiece we produce a messy range of frequencies, but of those, the instrument's physical parameters amplify a very select set and dampen the rest, and out comes a clear, musical tone. Singing works similarly: we change the physical shape of our voice boxes, throats, and mouths in order to make certain frequencies resonate and others not.

Put simply, resonance involves the tendency of systems to emphasize certain frequencies or patterns at the expense of others, based on the system's structural properties (what we call "acoustics"). It creates a rich, mathematically elegant sort of order, from a jumbled, chaotic starting point. We model and quantify resonance and acoustics in terms of waves, frequencies, harmonics, constructive and destructive interference, and the properties of systems which support or dampen certain frequencies.
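
For readers who want the math: the textbook model behind this is the driven, damped harmonic oscillator. Here's a minimal numerical sketch (the 440 Hz natural frequency and the damping value are arbitrary illustrative choices) showing how a resonator amplifies driving frequencies near its natural frequency and dampens the rest:

```python
import math

def steady_state_amplitude(f_drive, f0=440.0, damping=0.05, force=1.0):
    """Steady-state response of a damped oscillator driven at f_drive.

    Standard result (up to mass): A = F / sqrt((w0^2 - w^2)^2 + (2*zeta*w0*w)^2),
    where zeta is the damping ratio and w, w0 are angular frequencies.
    """
    w, w0 = 2 * math.pi * f_drive, 2 * math.pi * f0
    return force / math.sqrt((w0**2 - w**2)**2 + (2 * damping * w0 * w)**2)

# Drive the same resonator across a range of frequencies:
for f in [220, 400, 435, 440, 445, 480, 880]:
    print(f"{f:4d} Hz -> relative amplitude {steady_state_amplitude(f):.2e}")
# Amplitude peaks sharply near 440 Hz and is tiny far from it: the system
# "selects" frequencies near its natural frequency and dampens the rest.
```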

So what is neural resonance?

Literally, 'resonance which happens in the context of the brain and neurons', or the phenomenon where the brain's 'acoustics' prioritizes certain patterns, frequencies, and harmonics of neural firings over others.

Examples would include a catchy snippet of music or a striking image that gets stuck in one's head, with the neural firing patterns that represent these snippets echoing or 'resonating' inside the brain in some fashion for hours on end.[1] Similarly, though ideas enter the brain differently, they often get stuck, or "resonate," as well-- see, for instance, Dawkins on memes. In short, neural resonance is the tendency for some patterns in the brain (ideas) to persist more strongly than others, due to the mathematical interactions between the patterns of neural firings into which perceptions and ideas are encoded, and the 'acoustic' properties of the brain itself.

But if we want to take the concept of neural resonance as more than a surface curiosity-- as I think we should-- we can make a deeper analogy to the dynamics of resonant and acoustic systems by modeling information as actually resonating in the brain. The claim is that there are deep, rich, functionally significant, and semi-literal parallels between many aspects of brain dynamics and audio theory. Just as sound resonates in and is shaped by a musical instrument, ideas enter, resonate in, are shaped by, and ultimately leave their mark on our brains.

I thought the brain was a computer, not a collection of resonant chambers?

Yes; I'm essentially arguing that the brain computes via resonance and essentially acoustical mechanics.

So what is this resonance theory, specifically?

I'm basically arguing that we should try to semi-literally adapt the equations we've developed for sound and music to the neural context, and that most neural phenomena can be explained pretty darn well in terms of these equations. In short:

The brain functions as a set of connected acoustic chambers. We can think of it as a multi-part building, with each room tuned to make slightly different harmonies resonate, and with doors opening and closing all the time so these harmonies constantly mix. (Sometimes tones carry through the walls to adjacent rooms.) The harmonies are thoughts; the 'rooms' are brain regions.

Importantly, the transformations which brain regions apply to thoughts are akin to the transformations a specific room would apply to a certain harmony. The acoustics of the room-- i.e., the 'resonant properties' of a brain region-- profoundly influence the pattern occupying it. The essence of thinking, then, is letting these patterns enter our brain regions and resonate/refine themselves until they ring true.

My basic argument is that you can explain basically every important neural dynamic within the brain in terms of resonance-- that it's a comprehensive, generative, and predictive model, much more so than current 'circuit' or 'voting' based analogies.

Here are some neural phenomena contextualized in terms of resonance:

- Sensory preprocessing filters: as information enters the brain, it's encoded into highly time-dependent waves of neural discharges. The 'neuroacoustic' properties of the brain-- which kinds of wave-patterns are naturally amplified (i.e., resonate) or dampened by the neural networks relaying the pattern-- act as a built-in, 'free' signal filter. For instance, much of the function of the visual and auditory cortexes emerges from the sorts of patterns which they amplify or dampen. (A toy filter sketch follows this list.)

- Competition for neural resources: much of the dynamics of the brain centers around thoughts and emotions competing for neural resources, and one of the central challenges of models purporting to describe neural function is to provide a well-defined fitness condition for this competition. Under the neural resonance / neuroacoustics model, this is very straightforward: patterns which resonate well in the brain acquire more resources (territory), and maintain them better, than patterns which resonate less well.

- What happens when we're building an idea: certain types of deliberative or creative thinking may be analogous to tweaking a neural pattern's profile such that it resonates better.

- How ideas can literally collide: if two neural patterns converge inside a brain region, one of several overlapping things may occur: one pattern resonates more dominantly and swamps the other; the patterns interfere destructively; they interfere constructively; or a new idea emerges directly from the wave interference pattern.

- How ideas change us: since neural activity is highly conditioned, patterns which resonate more change more neural connections. I.e., the more a thought, emotion, or even snippet of music persists in resonating and causing neurons to fire in the same pattern, the more it leaves its mark on the brain. Presumably, having a certain type of resonance occur in the brain primes the brain's neuroacoustics to make patterns like it more likely to resonate in the future (see, for instance, sensitization aka kindling).[2] You become what resonates within you.
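
Here is the toy filter sketch promised above, with standard signal-processing machinery standing in for a network's 'neuroacoustics' (the 40 Hz signal, the noise level, and the 30-50 Hz passband are arbitrary illustrative choices):

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
fs = 1000.0                      # sample rate, Hz
t = np.arange(0, 1, 1 / fs)

# A "messy range of frequencies": a 40 Hz signal buried in broadband noise.
clean = np.sin(2 * np.pi * 40 * t)
noisy = clean + 2.0 * rng.standard_normal(t.size)

# The network's hypothetical "neuroacoustics": a band-pass around 30-50 Hz.
b, a = butter(4, [30, 50], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, noisy)

# The 40 Hz component survives; most of the noise is dampened "for free".
print("correlation of filtered output with clean signal:",
      round(np.corrcoef(filtered, clean)[0, 1], 2))
```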

In short, resonance, or the tendency for certain neural firing patterns to persist due to how their frequency- and wave-related properties interact with the features of the brain and each other, is a significant factor in the dynamics of how the brain filters, processes, and combines signals. However, we should also keep in mind that:


Resonance in the brain is an inherently dynamic property because the brain actively manages its neuroacoustics!

I've argued above that our 'neuroacoustics'-- that which determines what sorts of patterns resonate in our heads and get deeply ingrained in our neural nets-- is important and actively shapes what goes on in our heads. But this is just half the story: we can't get from static neuroacoustic properties to a fully-functioning brain, since, if nothing else, resonant patterns would get stuck. The other, equally important half is that the brain has the ability to contextually amplify, dampen, filter, and in general manage its neural resonances, or in other words contextually shape its neuroacoustics.

Some of the logic of this management may be encoded into regional topologies and intrinsic properties of neuron activation, but I'd estimate that the majority (perhaps 80%) of neuroacoustic management occurs via the contextual release of specific neurotransmitters, and in fact this could be said to be their central task.

With regard to what manages the managers: presumably neurotransmitter release could be tightly coupled with the current resonance activity in various brain regions, but the story of serotonin, dopamine, and norepinephrine may be somewhat complicated, as it's unclear how much of neurotransmitter activity is a stateless, walk-forward process. The brain's metamanagement may be a phenomenon resistant to simple rules and generalities.

A key point regarding the brain managing its neuroacoustics is that how good the brain is at doing so likely varies significantly between individuals, and this variance may be at the core of many mental phenomena. For instance:

- That which distinguishes both gifted learners and high-IQ individuals from the general populace may be that their brains are more flexible in manipulating their neuroacoustic properties to resonate better to new concepts and abstract situations, respectively. Capacity for empathy may be shorthand for 'ability to accurately simulate or mirror other people's neuroacoustic properties'.

- Likewise, malfunctions and gaps in the brain’s ability to manage its neural resonance, particularly in matching the proper neuroacoustic properties to a given situation, may be a large part of the story of mental illness and social dysfunction. Autism spectrum disorders, for instance, may be almost entirely caused by malfunctions in the brain's ability to regulate its neuroacoustic properties.

One lever the brain could be using to manage its neuroacoustics is the ability to pick which regions a thought pattern is allowed to resonate in. A single region vs multiple, regions with properties of X rather than of Y, etc. Another lever is changing the neuroacoustic properties within a region. Yet another lever is changing the effective "acoustic filter" properties inherent in connections between brain regions-- thoughts will necessarily be filtered and streamlined as they leave one region and enter another, but perhaps the way they are filtered can be changed. It's unclear how the brain might use each of these neuroacoustic management techniques depending on the situation, but I would be surprised if the brain didn't utilize all three.

Further implications:

- If we can exercise and improve the brain's ability to manage its neural resonance (perhaps with neurofeedback?), all of these things (IQ, ability to learn, mental health, social dexterity) should improve.

- Mood may be another word for neuroacoustic configuration. A change in mood implies a change in which ideas resonate in one's mind. Maintaining a thought or emotion means maintaining one's neuroacoustic configuration. (See the addendum on chord structures and Depression.)

- 'Prefrontal biasing', or activity in the prefrontal cortex altering competitive dynamics in the brain, may be viewed in terms of resonance: put simply, the analogy is that the PFC is located at a leveraged acoustic position (e.g., the tuning pegs of a guitar) and has a strong influence on the resonant properties of many other regions.

- Phenomena such as migraines may essentially be malfunctions in the brain's neuroacoustic management. A runaway resonance.

- I'm hopeful that we should be able to derive a priori things such as the 'Big Five' personality dimensions from simple differences in the brain's stochastic neuroacoustic properties and neuroacoustic management.

The story thus far:

So, that's an outline of a resonance / neuroacoustics model of the brain. In short, many brain phenomena are inherently based on resonance, and differences in many of the mental attributes we care about-- intelligence, empathy, mood, and so on-- are a result of the brain's ability (or lack thereof) to appropriately regulate its own neuroacoustic configuration.

Discussion:

Now, the natural question with a theory such as this is, 'is this a just-so story?' The evidence that would support or falsify this model is still out, and our methods of analyzing brain function in terms of frequency and firing patterns are still very rudimentary, but the model does seem to explain/predict the following:

- What cognition 'is';
- How competition for neural resources is resolved;
- How complex decision-making abilities may arise from simple neural properties;
- How ideas may interact with each other within the brain;
- That audio theory may be a rich source of starting points for equations to model information dynamics in the brain;
- What the maintenance of thought and emotion entails, and why a change in mood implies a change in thinking style;
- How subconscious thought may(?) be processed;
- What intelligence is, and how there could be different kinds of intelligence;
- How various disorders may naturally arise from a central process of the brain (and that they are linked, and perhaps can be improved by a special kind of brain exercise);
- The division of function between neurons and neurotransmitters;
- The mechanism by which memes can be 'catchy' and how being exposed to memes can create a 'resonant beachhead' for similar memes;
- The mechanism of how neurofeedback can/should be broadly effective.

There are few holistic theories of brain function which cover half this ground.

Tests which have the ability to falsify or support models of neural function (such as this one) aren't available now, but may arise as we get better at simulating brains and such. I look forward to that-- it would certainly be helpful to be able to more precisely quantify things such as neural resonance, neuroacoustics, interference patterns within the brain, and such.


Closing thoughts:

As George Box famously said, 'all models are wrong, but some are useful.' This model certainly doesn't get everything right, and to some extent (just like its competitors) it is a just-so story-- but I think it's got at least three things going for it over similar models:
1. Fundamental simplicity-- it's one of the few models of neural function which can actually provide an intuitive answer to the question of what’s going on in someone's brain.
2. Emergent complexity-- from a small handful of concepts (or just one, depending on how you count it), the elegant complexity of neural dynamics emerges.
3. Ideal level of abstraction-- this is a model we can work downward from (e.g., as a sanity check for neural simulation, since the resonant properties of neural networks are tied to function-- the Blue Brain project is doing this to some extent) and upward from, to generate new explanations and predictions within psychology, since resonance appears to be a central and variable element of high-speed neural dynamics and of the formation and maintenance of thought and emotion.

If it's a good, meaningful model, we should be able to generate novel hypotheses to test. I have outlined some in my description above (e.g., that many, diverse mental phenomena are based on the brain's ability to manage its neural resonance, and if we improve this ability in one regard it should have significant spillover). There will be more. I wish I had the resources to generate and test specific hypotheses arising from this model.

ETA 10 years.


Footnotes and musings:

[1] As a rule, music resonates very easily in brains. Moreover, there's a great deal of variation in which types of music resonate in different people's brains. I have a vague but insistent suspicion that analyzing who finds which kinds of music 'catchy' can be extrapolated to understand at least some of the contours of what general types of things people's brains resonate to. I.e., music seems to lend itself toward 'typing' neural resonance contours.

[2] The brain's emotive and cognitive machinery are so tightly linked-- there's support from the literature to say no real distinction exists-- that a huge question mark is how the resonance of thoughts and emotions coexist and interact. It's safe to say that the brain's resources are finite such that, all else being equal, the presence of strong emotions reduces capacity for abstract cognition and general processing. But does the relationship go beyond this? Can we also say that having certain emotions resonate 'sensitizes' or optimizes the brain for certain types of cognition? Music is perhaps the most powerful and consistent harbinger of emotion; does listening to music 'sensitize' or 'prime' the brain's resonant properties in the same way as raw emotions might? Are we performing a mass neurodynamics experiment on society with e.g., all the rap or emo pop music out there? How could we even attempt to characterize these hypothetical stochastic changes in average neural resonance profiles?


- Resonance is about the reinforcement of frequencies. So what specific frequencies might we be dealing with here?

It's hard to say for sure, since we have no robust (or even fragile) way of tracking information as it enters and makes its way through the brain. With no way to track or identify information, we can't give a confident answer to this (and so many other questions).

But a priori, as a first approximation, I would suggest:
(1) The frequencies of previously-identified 'brainwaves' (alpha, delta, gamma, etc) may be relevant to information encoding mechanics (or, alternatively, to neural competition dynamics);
(2) If we model a neural region as a fairly self-contained resonant chamber (with limited but functionally significant leakage), the time it takes a neural signal, following an 'average' neural path, to get to the opposite edge of the chamber and return will be a key property of the system. (Sound travels in fairly straight lines; neural "waves" do not. This sort of analysis will be non-trivial, and will perhaps need to be divorced from a strict spatial interpretation. And we may need to account for chemical communication.) Each brain region has a slightly different profile in this regard, and this may help shape what sorts of information come to live in each brain region. (A toy calculation follows.)
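
To make point (2) concrete, here's a toy back-of-the-envelope sketch. Every number in it is assumed purely for illustration-- real conduction velocities span roughly 0.5-120 m/s depending on myelination, and the effective path length of a 'chamber' is anyone's guess-- but it shows how anatomically plausible scales land in the same ballpark as measured brainwave bands:

```python
# Toy estimate: fundamental "resonant" frequency of a neural loop as the
# reciprocal of its round-trip signal time. All parameters are assumed,
# illustrative values only.

def loop_fundamental_hz(path_length_m, conduction_velocity_mps, synaptic_delay_s=0.0):
    round_trip = 2 * path_length_m / conduction_velocity_mps + synaptic_delay_s
    return 1.0 / round_trip

# A 2 cm chamber, slow unmyelinated fibers (~1 m/s), no synaptic delays:
print(loop_fundamental_hz(0.02, 1.0))          # 25 Hz -- beta/gamma range
# Same chamber with 10 ms of synaptic delay per round trip:
print(loop_fundamental_hz(0.02, 1.0, 0.010))   # 20 Hz
```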

Addendum, 10-11-10: Chord Structures

Major chords are emotively associated with contentment; minor chords with tragedy. If my resonance analogy is correct, there may be a tight, deeply structural analogy between musical theory, emotion, and neural resonance. I.e., musical chords are mathematically homologous to patterns of neural resonance, wherein major and minor forms exist and are almost always associated with positive and negative affect, respectively.

Now, it's not clear whether there's an elegant, semi-literal correspondence between e.g., minor chords, "minor key" neural resonances, and negative affect. There could be three scenarios:

1. No meaningful correspondence exists.
2. There isn't an elegant mathematical parallel between e.g., the structure of minor chords and patterns of activity which produce negative affect in the brain, but within the brain we can still categorize patterns as producing positive or negative affect based on their 'chord' structure.
3. Musical chords are deeply structurally analogous to patterns of neural resonance, in that e.g., a minor chord has a certain necessary internal mathematical structure that is replicated in all neural patterns that have a negative affect.

The answer is not yet clear. But I think that the incredible sensitivity we have to minute changes in musical structure- and the ability of music to so profoundly influence our mood- is evidence of (3), that musical chords and the structure of patterns of neural impulses are deeply analogous, and knowledge from one domain may elegantly apply to the other. We're at a loss as to how and why humans invented music; it's much less puzzling if it's a relatively elegant (though simplified) expression of what's actually going on in our heads. Music may be an admittedly primitive but exceedingly well-developed expression of neuro-ontology, hiding in front of our noses.
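
For concreteness, the 'internal mathematical structure' at stake in (3) can be written down. A minimal sketch in just intonation (the tuning choice and the 220 Hz root are illustrative assumptions): a major triad's frequencies sit in the ratio 4:5:6 and a minor triad's in 10:12:15, so the two affects differ by one small, precise shift in frequency ratios.

```python
from fractions import Fraction

# Just-intonation triads built on a root frequency (220 Hz here, arbitrary).
# Major triad: root, major third (5/4), perfect fifth (3/2) -> ratio 4:5:6
# Minor triad: root, minor third (6/5), perfect fifth (3/2) -> ratio 10:12:15
root = 220.0
major = [root * r for r in (Fraction(1), Fraction(5, 4), Fraction(3, 2))]
minor = [root * r for r in (Fraction(1), Fraction(6, 5), Fraction(3, 2))]

print("major:", major)  # [220.0, 275.0, 330.0]
print("minor:", minor)  # [220.0, 264.0, 330.0]
# One interval (the third) shifts by a factor of 25/24, and the felt character
# of the chord flips -- a tiny, well-defined structural change.
```

If scenario (3) holds, it's some analogue of this small, well-defined ratio shift that the brain's resonance patterns would replicate.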

How do we prove this?

Correlating thought structure with affect is a Hard problem, mostly because isolating a single 'thought' within the multidimensional cacophony of the brain is very difficult. There has been some limited progress with inputting a 'trackable signal' of very specific parameters (e.g., a 22 Hz pulsed light, or a 720 Hz audio wave) and tracing this through sensory circuits until it vanishes from view. There's a lot of work going on to make this an easier problem. Ultimately we'd be drawing upon the mathematical structure of musical chords and looking for abstract, structural similarities with patterns of neural firings, and attempting to correlate positive and negative affect with these patterns.

The bottom line:

If this chord structure hypothesis is even partly true, it (along with parts of music theory) could form the basis for a holy grail of neuroscience, a systems model of emotional affect. E.g., Depression could be partly but usefully characterized in terms of a brain's resonance literally being tuned to a minor key.