11.15.2009

Toward a new ontology of brain dynamics: neural resonance + neuroacoustics

Part 3 of my series. I think this is an important idea.

Part 1: Neurobiology, psychology, and the missing link(s)
Part 2: Gene Expression as a comprehensive diagnostic platform
Part 3: Neural resonance + neuroacoustics
Part 4: Location, location, location!

The brain is extraordinarily complex. We are in desperate need of models that decode this complexity and allow us to speak about the brain's fundamental dynamics simply, comprehensively, and predictively. I believe I have one, and it revolves around resonance.

Neural resonance is currently an underdefined curiosity at the fringes of respectable neuroscience research. I believe that over the next 10 years it'll grow into a central part of the vocabulary of functional neuroscience. I could be wrong-- but here's the what and why.

Resonance, in a nutshell

To back up a bit and situate the concept of resonance, consider how we create music. Every single one of our non-electronic musical instruments operates via resonance-- e.g., by changing fingering on a trumpet or flute, or moving a trombone slide to a different position, we change which frequencies resonate within the instrument. When we blow into the mouthpiece we produce a messy range of frequencies, but of those, our instrument's physical parameters amplify a very select set and dampen the rest, and out comes a clear, musical tone. Singing works similarly: we change the physical shape of our voiceboxes, throats, and mouths in order to make certain frequencies resonate and others not.

Put simply, resonance involves the tendency of systems to emphasize certain frequencies or patterns at the expense of others, based on the system's structural properties (what we call "acoustics"). It creates a rich, mathematically elegant sort of order, from a jumbled, chaotic starting point. We model and quantify resonance and acoustics in terms of waves, frequencies, harmonics, constructive and destructive interference, and the properties of systems which support or dampen certain frequencies.
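The amplify-some-frequencies-and-dampen-the-rest idea can be made concrete with the textbook driven, damped oscillator. This is a minimal sketch, not anything brain-specific-- the 440 Hz 'natural frequency' and the damping value are just illustrative numbers:

```python
import math

def response_amplitude(drive_freq, natural_freq=440.0, damping=0.01):
    """Steady-state response of a damped oscillator driven at drive_freq.

    Frequencies near natural_freq resonate (large response); the rest are
    relatively dampened -- the same selectivity an instrument's body applies
    to the messy range of frequencies entering it."""
    w, w0 = 2 * math.pi * drive_freq, 2 * math.pi * natural_freq
    return 1.0 / math.sqrt((w0**2 - w**2)**2 + (2 * damping * w0 * w)**2)

# Feed in a jumble of frequencies; only those near 440 Hz come out strongly.
for f in (220, 435, 440, 445, 880):
    print(f"{f:4d} Hz -> {response_amplitude(f) / response_amplitude(440.0):.4f}")
```

The 'order from chaos' point falls out directly: a flat mix of inputs produces an output dominated by whatever the system's structure happens to favor.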

So what is neural resonance?

Literally, 'resonance which happens in the context of the brain and neurons', or the phenomenon where the brain's 'acoustics' prioritizes certain patterns, frequencies, and harmonics of neural firings over others.

Examples would include a catchy snippet of music or a striking image that gets stuck in one's head, with the neural firing patterns that represent these snippets echoing or 'resonating' inside the brain in some fashion for hours on end.[1] Similarly, though ideas enter the brain differently, they often get stuck, or "resonate," as well-- see, for instance, Dawkins on memes. In short, neural resonance is the tendency for some patterns in the brain (ideas) to persist more strongly than others, due to the mathematical interactions between the patterns of neural firings into which perceptions and ideas are encoded, and the 'acoustic' properties of the brain itself.

But if we want to take the concept of neural resonance as more than a surface curiosity-- as I think we should-- we can make a deeper analogy to the dynamics of resonant and acoustic systems by modeling information as actually resonating in the brain. The claim is that there are deep, rich, functionally significant, and semi-literal parallels between many aspects of brain dynamics and audio theory. Just as sound resonates in and is shaped by a musical instrument, ideas enter, resonate in, are shaped by, and ultimately leave their mark on our brains.

I thought the brain was a computer, not a collection of resonant chambers?

Yes; I'm essentially arguing that the brain computes via resonance and acoustical mechanics.

So what is this resonance theory, specifically?

I'm basically arguing that we should try to semi-literally adapt the equations we've developed for sound and music to the neural context, and that most neural phenomena can be explained pretty darn well in terms of these equations. In short:

The brain functions as a set of connected acoustic chambers. We can think of it as a multi-part building, with each room tuned to make slightly different harmonies resonate, and with doors opening and closing all the time so these harmonies constantly mix. (Sometimes tones carry through the walls to adjacent rooms.) The harmonies are thoughts; the 'rooms' are brain regions.

Importantly, the transformations which brain regions apply to thoughts are akin to the transformations a specific room would apply to a certain harmony. The acoustics of the room-- i.e., the 'resonant properties' of a brain region-- profoundly influence the pattern occupying it. The essence of thinking, then, is letting these patterns enter our brain regions and resonate/refine themselves until they ring true.

My basic argument is that you can explain basically every important neural dynamic within the brain in terms of resonance, and that it's a comprehensive, generative, and predictive model-- much more so than current 'circuit' or 'voting' based analogies.

Here are some neural phenomena contextualized in terms of resonance:

- Sensory preprocessing filters: as information enters the brain, it's encoded into highly time-dependent waves of neural discharges. The 'neuroacoustic' properties of the brain-- which kinds of wave-patterns are naturally amplified (i.e., resonate) or dampened by the neural networks relaying the pattern-- act as a built-in, 'free' signal filter. For instance, much of the function of the visual and auditory cortices emerges from the sorts of patterns which they amplify or dampen.

- Competition for neural resources: much of the brain's dynamics centers around thoughts and emotions competing for neural resources, and one of the central challenges for models purporting to describe neural function is to provide a well-defined fitness condition for this competition. Under the neural resonance / neuroacoustics model, this is very straightforward: patterns which resonate well in the brain acquire and maintain more resources (territory) than those which resonate less well.

- What happens when we're building an idea: certain types of deliberative or creative thinking may be analogous to tweaking a neural pattern's profile such that it resonates better.

- How ideas can literally collide: if two neural patterns converge inside a brain region, one of several overlapping things may occur: one pattern resonates more dominantly and swamps the other; the patterns destructively interfere; they constructively interfere; or a new idea emerges directly from the wave interference pattern.

- How ideas change us: since neural activity is highly conditioned, patterns which resonate more change more neural connections. I.e., the more a thought, emotion, or even snippet of music persists in resonating and causing neurons to fire in the same pattern, the more it leaves its mark on the brain. Presumably, having a certain type of resonance occur in the brain primes the brain's neuroacoustics to make patterns like it more likely to resonate in the future (see, for instance, sensitization aka kindling).[2] You become what resonates within you.
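The 'collide' case above can be sketched numerically. Superposing two idealized firing-rate waves shows the constructive and destructive extremes (the 10 Hz frequency is arbitrary, and real neural patterns are of course not clean sinusoids):

```python
import math

def combined_rms(freq_a, freq_b, phase_b=0.0, sample_rate=1000, duration=1.0):
    """RMS amplitude of two superposed sinusoidal 'firing patterns'."""
    n = int(sample_rate * duration)
    total = 0.0
    for i in range(n):
        t = i / sample_rate
        s = (math.sin(2 * math.pi * freq_a * t)
             + math.sin(2 * math.pi * freq_b * t + phase_b))
        total += s * s
    return math.sqrt(total / n)

# Identical patterns in phase reinforce; a half-cycle offset cancels them.
in_phase = combined_rms(10, 10)             # constructive: patterns reinforce
anti_phase = combined_rms(10, 10, math.pi)  # destructive: patterns cancel
print(in_phase, anti_phase)
```

Everything between those two extremes-- partial cancellation, beat frequencies, genuinely new interference patterns-- falls out of the same arithmetic.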

In short, resonance, or the tendency for certain neural firing patterns to persist due to how their frequency- and wave-related properties interact with the features of the brain and each other, is a significant factor in the dynamics of how the brain filters, processes, and combines signals. However, we should also keep in mind that:


Resonance in the brain is an inherently dynamic property because the brain actively manages its neuroacoustics!

I've argued above that our 'neuroacoustics'- that which determines what sorts of patterns resonate in our heads and get deeply ingrained in our neural nets- is important and actively shapes what goes on in our heads. But this is just half the story: we can't get from static neuroacoustic properties to a fully-functioning brain, since, if nothing else, resonant patterns would get stuck. The other, equally important half is that the brain has the ability to contextually amplify, dampen, filter, and in general manage its neural resonances, or in other words contextually shape its neuroacoustics.

Some of the logic of this management may be encoded into regional topologies and intrinsic properties of neuron activation, but I'd estimate that the majority (perhaps 80%) of neuroacoustic management occurs via the contextual release of specific neurotransmitters, and in fact this could be said to be their central task.

As for what manages the managers: presumably neurotransmitter release could be tightly coupled with the current resonance activity in various brain regions, but the story of serotonin, dopamine, and norepinephrine may be somewhat complicated, as it's unclear how much of neurotransmitter activity is a stateless, walk-forward process. The brain's metamanagement may be a phenomenon resistant to simple rules and generalities.

A key point regarding the brain managing its neuroacoustics is that how good the brain is at doing so likely varies significantly between individuals, and this variance may be at the core of many mental phenomena. For instance:

- That which distinguishes both gifted learners and high-IQ individuals from the general populace may be that their brains are more flexible in manipulating their neuroacoustic properties to resonate better to new concepts and abstract situations, respectively. Capacity for empathy may be shorthand for 'ability to accurately simulate or mirror other people's neuroacoustic properties'.

- Likewise, malfunctions and gaps in the brain's ability to manage its neural resonance, particularly in matching the proper neuroacoustic properties to a given situation, may be a large part of the story of mental illness and social dysfunction. Autism spectrum disorders, for instance, may be almost entirely caused by malfunctions in the brain's ability to regulate its neuroacoustic properties.

One lever the brain could be using to manage its neuroacoustics is the ability to pick which regions a thought pattern is allowed to resonate in. A single region vs multiple, regions with properties of X rather than of Y, etc. Another lever is changing the neuroacoustic properties within a region. Yet another lever is changing the effective "acoustic filter" properties inherent in connections between brain regions-- thoughts will necessarily be filtered and streamlined as they leave one region and enter another, but perhaps the way they are filtered can be changed. It's unclear how the brain might use each of these neuroacoustic management techniques depending on the situation, but I would be surprised if the brain didn't utilize all three.

Further implications:

- If we can exercise and improve the brain's ability to manage its neural resonance (perhaps with neurofeedback?), all of these things (IQ, ability to learn, mental health, social dexterity) should improve.

- Mood may be another word for neuroacoustic configuration. A change in mood implies a change in which ideas resonate in one's mind. Maintaining a thought or emotion means maintaining one's neuroacoustic configuration. (See addendum on chord structures and Depression.)

- 'Prefrontal biasing', or activity in the prefrontal cortex altering competitive dynamics in the brain, may be viewed in terms of resonance: put simply, the analogy is that the PFC is located at a leveraged acoustic position (e.g., the tuning pegs of a guitar) and has a strong influence on the resonant properties of many other regions.

- Phenomena such as migraines may essentially be malfunctions in the brain's neuroacoustic management. A runaway resonance.

- I'm hopeful that we'll be able to derive, a priori, things such as the 'Big Five' personality dimensions from simple differences in the brain's stochastic neuroacoustic properties and neuroacoustic management.

The story thus far:

So, that's an outline of a resonance / neuroacoustics model of the brain. In short, many brain phenomena are inherently based on resonance, and differences in many of the mental attributes we care about-- intelligence, empathy, mood, and so on-- are a result of the brain's ability (or lack thereof) to appropriately regulate its own neuroacoustic configuration.

Discussion:

Now, the natural question with a theory such as this is, 'is this a just-so story?' The evidence that would support or falsify this model isn't in yet, and our methods of analyzing brain function in terms of frequency and firing patterns are still very rudimentary, but the model does seem to explain/predict the following:

- What cognition 'is';
- How competition for neural resources is resolved;
- How complex decision-making abilities may arise from simple neural properties;
- How ideas may interact with each other within the brain;
- That audio theory may be a rich source of starting points for equations to model information dynamics in the brain;
- What the maintenance of thought and emotion entails, and why a change in mood implies a change in thinking style;
- How subconscious thought may(?) be processed;
- What intelligence is, and how there could be different kinds of intelligence;
- How various disorders may naturally arise from a central process of the brain (and that they are linked, and perhaps can be improved by a special kind of brain exercise);
- The division of function between neurons and neurotransmitters;
- The mechanism by which memes can be 'catchy' and how being exposed to memes can create a 'resonant beachhead' for similar memes;
- The mechanism of how neurofeedback can/should be broadly effective.

There are few holistic theories of brain function which cover half this ground.

Tests which have the ability to falsify or support models of neural function (such as this one) aren't available now, but may arise as we get better at simulating brains and such. I look forward to that-- it would certainly be helpful to be able to more precisely quantify things such as neural resonance, neuroacoustics, interference patterns within the brain, and such.


Closing thoughts:

As George Box famously said, 'all models are wrong, but some are useful.' This model certainly doesn't get everything right, and to some extent (just like its competitors) it is a just-so story-- but I think it's got at least three things going for it over similar models:
1. Fundamental simplicity-- it's one of the few models of neural function which can actually provide an intuitive answer to the question of what's going on in someone's brain.
2. Emergent complexity-- from a small handful of concepts (or just one, depending on how you count it), the elegant complexity of neural dynamics emerges.
3. Ideal level of abstraction-- this is a model we can work downward from, e.g., as a sanity check for neural simulation, since the resonant properties of neural networks are tied to function (the Blue Brain project is doing this to some extent); and upward from, to generate new explanations/predictions within psychology, since resonance appears to be a central and variable element of high-speed neural dynamics and of the formation and maintenance of thought and emotion.

If it's a good, meaningful model, we should be able to generate novel hypotheses to test. I have outlined some in my description above (e.g., that many, diverse mental phenomena are based on the brain's ability to manage its neural resonance, and that if we improve this ability in one regard it should have significant spillover). There will be more. I wish I had the resources to generate and test specific hypotheses arising from this model.

ETA 10 years.


Footnotes and musings:

[1] As a rule, music resonates very easily in brains. Moreover, there's a great deal of variation in which types of music resonate in different people's brains. I have a vague but insistent suspicion that analyzing who finds which kinds of music 'catchy' can be extrapolated to understand at least some of the contours of what general types of things people's brains resonate to. I.e., music seems to lend itself toward 'typing' neural resonance contours.

[2] The brain's emotive and cognitive machinery are so tightly linked-- there's support from the literature to say no real distinction exists-- that a huge question mark is how the resonance of thoughts and emotions coexist and interact. It's safe to say that the brain's resources are finite such that, all else being equal, the presence of strong emotions reduces capacity for abstract cognition and general processing. But does the relationship go beyond this? Can we also say that having certain emotions resonate 'sensitizes' or optimizes the brain for certain types of cognition? Music is perhaps the most powerful and consistent trigger of emotion; does listening to music 'sensitize' or 'prime' the brain's resonant properties in the same way as raw emotions might? Are we performing a mass neurodynamics experiment on society with e.g., all the rap or emo pop music out there? How could we even attempt to characterize these hypothetical stochastic changes in average neural resonance profiles?


- Resonance is about the reinforcement of frequencies. So what specific frequencies might we be dealing with here?

It's hard to say for sure, since we have no robust (or even fragile) way of tracking information as it enters and makes its way through the brain. With no way to track or identify information, we can't give a confident answer to this (and so many other questions).

But a priori, as a first approximation, I would suggest:
(1) The frequencies of previously-identified 'brainwaves' (alpha, delta, gamma, etc) may be relevant to information encoding mechanics (or, alternatively, to neural competition dynamics);
(2) If we model a neural region as a fairly self-contained resonant chamber (with limited but functionally significant leakage), the time it takes a neural signal, following an 'average' neural path, to get to the opposite edge of the chamber and return will be a key property of the system. (Sound travels in fairly straight lines; neural "waves" do not. This sort of analysis will be non-trivial, and will perhaps need to be divorced from a strict spatial interpretation. And we may need to account for chemical communication.) Each brain region has a slightly different profile in this regard, and this may help shape what sorts of information come to live in each brain region.
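Point (2) can be turned into a back-of-the-envelope formula by borrowing the standing-wave relation from acoustics, f = v / (2L). This is a deliberately naive transplant, and both numbers below (path length, propagation speed) are invented for illustration:

```python
def fundamental_frequency_hz(path_length_mm, signal_speed_mm_per_s):
    """Lowest resonant frequency of a chamber in which a signal must reach
    the far edge and return before reinforcing itself: f = v / (2 * L).

    A naive acoustic transplant -- neural 'waves' don't follow straight
    lines, so treat this as an order-of-magnitude sketch only."""
    return signal_speed_mm_per_s / (2.0 * path_length_mm)

# Hypothetical numbers: a 50 mm effective path with signals propagating at
# 1 m/s (1000 mm/s) gives a 10 Hz fundamental -- within the range of the
# classic 'brainwave' bands mentioned in point (1).
print(fundamental_frequency_hz(50, 1000))
```

The interesting part is the sensitivity: halving the effective path length or doubling the conduction speed doubles the fundamental, so modest regional differences would yield distinct resonance profiles.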

Addendum, 10-11-10: Chord Structures

Major chords are emotively associated with contentment; minor chords with tragedy. If my resonance analogy is correct, there may be a tight, deeply structural analogy between musical theory, emotion, and neural resonance. I.e., musical chords are mathematically homologous to patterns of neural resonance, wherein major and minor forms exist and are almost always associated with positive and negative affect, respectively.

Now, it's not clear whether there's an elegant, semi-literal correspondence between e.g., minor chords, "minor key" neural resonances, and negative affect. I see three scenarios:

1. No meaningful correspondence exists.
2. There isn't an elegant mathematical parallel between e.g., the structure of minor chords and patterns of activity which produce negative affect in the brain, but within the brain we can still categorize patterns as producing positive or negative affect based on their 'chord' structure.
3. Musical chords are deeply structurally analogous to patterns of neural resonance, in that e.g., a minor chord has a certain necessary internal mathematical structure that is replicated in all neural patterns that have a negative affect.

The answer is not yet clear. But I think that the incredible sensitivity we have to minute changes in musical structure-- and the ability of music to so profoundly influence our mood-- is evidence of (3), that musical chords and the structure of patterns of neural impulses are deeply analogous, and that knowledge from one domain may elegantly apply to the other. We're at a loss as to how and why humans invented music; it's much less puzzling if music is a relatively elegant (though simplified) expression of what's actually going on in our heads. Music may be an admittedly primitive but exceedingly well-developed expression of neuro-ontology, hiding right under our noses.
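For reference, the internal mathematical structure of the two triad types is small and concrete. In just intonation, the entire major/minor distinction comes down to the ratio of the third (5/4 vs 6/5):

```python
# Just-intonation frequency ratios relative to the root note.
MAJOR_TRIAD = (1.0, 5 / 4, 3 / 2)  # root, major third, perfect fifth
MINOR_TRIAD = (1.0, 6 / 5, 3 / 2)  # root, minor third, perfect fifth

def triad_frequencies(root_hz, ratios):
    """Absolute frequencies of a triad built on a given root."""
    return [root_hz * r for r in ratios]

# On an A root (220 Hz), the major/minor distinction -- and its remarkably
# reliable emotional valence -- is a shift of one tone by ~11 Hz:
print(triad_frequencies(220, MAJOR_TRIAD))
print(triad_frequencies(220, MINOR_TRIAD))
```

That such a minute structural difference carries such consistent affective weight is exactly the sensitivity the paragraph above appeals to.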

How do we prove this?

Correlating thought structure with affect is a Hard problem, mostly because isolating a single 'thought' within the multidimensional cacophony of the brain is very difficult. There has been some limited progress with inputting a 'trackable signal' of very specific parameters (e.g., a 22 Hz pulsed light, or a 720 Hz audio wave) and tracing it through sensory circuits until it vanishes from view. There's a lot of work going on to make this an easier problem. Ultimately we'd be drawing upon the mathematical structure of musical chords, looking for abstract, structural similarities with patterns of neural firings, and attempting to correlate positive and negative affect with these patterns.
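To sketch the 'trackable signal' idea: if a stimulus is pulsed at a known frequency, a lock-in-style correlation against sine/cosine references can estimate how strongly that frequency survives at a recording site. The trace below is a toy signal with invented numbers, not real neural data:

```python
import math

def component_amplitude(trace, ref_freq, sample_rate):
    """Estimate the amplitude of a ref_freq component in a recorded trace by
    correlating it against sine and cosine references (a toy lock-in detector)."""
    n = len(trace)
    s = sum(x * math.sin(2 * math.pi * ref_freq * i / sample_rate)
            for i, x in enumerate(trace))
    c = sum(x * math.cos(2 * math.pi * ref_freq * i / sample_rate)
            for i, x in enumerate(trace))
    return 2 * math.hypot(s, c) / n

# One second of a fake recording: a weak 22 Hz 'tracer' buried under
# stronger, unrelated 7 Hz activity.
rate = 1000
trace = [0.3 * math.sin(2 * math.pi * 22 * i / rate)
         + math.sin(2 * math.pi * 7 * i / rate) for i in range(rate)]

print(component_amplitude(trace, 22, rate))  # the tracer stands out (~0.3)
print(component_amplitude(trace, 13, rate))  # ~0 at a control frequency
```

Repeating this at successive sites would let you watch the tracer attenuate until, as the paragraph above puts it, it vanishes from view.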

The bottom line:

If this chord structure hypothesis is even partly true, it (along with parts of music theory) could form the basis for a holy grail of neuroscience, a systems model of emotional affect. E.g., Depression could be partly but usefully characterized in terms of a brain's resonance literally being tuned to a minor key.

11.14.2009

Prediction: Gene Expression as a comprehensive diagnostic platform

This is part 2 of my series on the brain, neuroscience, and medicine.

Part 1: Neurobiology, psychology, and the missing link(s)
Part 2: Gene Expression as a comprehensive diagnostic platform
Part 3: Neural resonance + neuroacoustics
Part 4: Location, location, location!

I have seen the future of medical diagnosis-- it's elegant, accurate, immediate, mostly doctor-less, comprehensive, and very computationally intensive. I don't know when it'll arrive, but it's racing toward us and when it hits, it'll change everything.

In short-- the future of medical diagnosis is to use a gene expression panel along with known functional and correlative connections between gene expression and pathology to perform thousands of parallel tests for every single human illness we know of-- no matter whether it's acute, chronic, pathogenic, mental, or lifestyle.

What do you mean? And how would it work?

The basis for using gene expression as a comprehensive diagnostic platform goes something like this:

- Gene expression is a measure of which (and to what extent) genes are being made into proteins and RNA. A gene expression test is much like a traditional genetic test, but since it goes beyond merely listing which genes your body has, and shows how much your body is using each one, it's a much better view of what's actually going on inside your body. Genes may be a blueprint of physiological potential-- but gene expression is a snapshot of physiological function.

- The vast majority of illnesses leave a significant imprint on a person's gene expression. A failing kidney, an inflamed appendix, obesity, a manic episode-- each will influence which genes are activated, and in very specific ways. It's possible, and I think fairly probable, that the imprints distinct physiological insults leave on gene expression will themselves be fairly distinct, and so in theory we should be able to work backward from gene expression to physiological insult.

- Once we've gathered a large collection of gene expression-known illness pairs (we could build this dataset by requiring e.g., hospitals to collect a gene expression sample when a diagnosis is made), we can start to train computers to identify which gene expression conditions are connected to each illness. Finding these sorts of connections is almost impossible for humans, but there exist computational approaches which in theory are fairly ideal.[1]

So in short, it won't be easy, but I think there's really nothing standing in the way of gene expression tests which use broad-spectrum correlative analysis to screen for all known illnesses at once.

I suspect the possibility of gene expression as a comprehensive diagnostic platform will start to become a "cool" thing for bioscience visionaries to yak about over the next 5 years. The large-scale data collection is, I think, the biggest hurdle, though finding solid correlations in a massive dataset against a background of variable application of diagnostic criteria is also non-trivial. But it's coming.


Challenges:
- Training computer programs (e.g., classification ANNs) requires a lot of good data, as does screening out false positives in a sample as wide as a full genome. Getting enough *good* samples where all the right diagnoses have been made will be challenging.
- Gene expression analysis is still having growing pains. E.g., "Protein sequencing gone awry: 1 sample, 27 labs, 20 results".
- Crunching the numbers on which of 22,000+ different genes (and potentially some non-protein-coding, RNA-producing genes) are correlated with each illness is far from a trivial problem.

Unknowns:
- Will gene expression from multiple locations be needed to diagnose some illnesses?
- It seems fairly safe to say we'll be able to diagnose e.g., kidney failure or malaria from gene expression data. But what about internal bleeding? And what about some of the more tricky or subjective mental illnesses? This technology will have its theoretical limits: what are they?

[1] This task falls outside the scope of this writeup, but just going with what I know, I'd take a set of gene expression-known illness pairs, divide the gene data up into smaller, more tractable pieces (perhaps along specific gene-network faultlines, perhaps randomly?), and train a classifier neural network on the pieces, which will attempt to predict a specific illness based on what it finds to be the most significant data in the subset. Layer these subset-based classifier neural networks under a 'master' classifier neural network which gives the final yes/no prediction. Test this model on progressively larger out-of-sample data sets. Repeat for each illness. There are undoubtedly solutions orders of magnitude better than this-- but it's a baseline start.
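To make the footnote's layered scheme concrete, here's a toy, pure-Python sketch. Everything is invented for illustration: the data is synthetic, and trivially simple threshold rules stand in for the subset and master neural networks:

```python
import random

random.seed(0)
N_GENES = 20
ILLNESS_GENES = {3, 12}  # hypothetical genes the illness up-regulates

def make_sample(ill):
    """Synthetic expression profile: baseline noise, plus elevation of the
    illness-linked genes when the patient is ill."""
    profile = [random.gauss(1.0, 0.2) for _ in range(N_GENES)]
    if ill:
        for g in ILLNESS_GENES:
            profile[g] += 0.8
    return profile

def train_subset_classifier(profiles, labels, genes):
    """Stand-in for a subset-trained classifier ANN: pick the gene in this
    slice whose ill/healthy means differ most, and threshold on it."""
    def means(g):
        ill = [p[g] for p, y in zip(profiles, labels) if y]
        ok = [p[g] for p, y in zip(profiles, labels) if not y]
        return sum(ill) / len(ill), sum(ok) / len(ok)
    g = max(genes, key=lambda g: abs(means(g)[0] - means(g)[1]))
    m_ill, m_ok = means(g)
    thresh, sign = (m_ill + m_ok) / 2, (1 if m_ill > m_ok else -1)
    return lambda p: int(sign * (p[g] - thresh) > 0)

def master_predict(subset_clfs, profile):
    """Stand-in for the 'master' network: majority vote over the subsets."""
    votes = sum(clf(profile) for clf in subset_clfs)
    return int(votes > len(subset_clfs) / 2)

# Train on 200 labeled profiles, slicing the 'genome' into 5 subsets of 4.
train = [(make_sample(i % 2 == 0), int(i % 2 == 0)) for i in range(200)]
profiles, labels = zip(*train)
subsets = [range(i, i + 4) for i in range(0, N_GENES, 4)]
clfs = [train_subset_classifier(profiles, labels, s) for s in subsets]

# Evaluate out-of-sample, as the footnote prescribes.
test = [(make_sample(i % 2 == 0), int(i % 2 == 0)) for i in range(100)]
accuracy = sum(master_predict(clfs, p) == y for p, y in test) / len(test)
print(f"out-of-sample accuracy: {accuracy:.2f}")
```

The structure, not the accuracy, is the point: subsets whose slice happens to contain an informative gene become reliable voters, the rest contribute noise, and the master layer aggregates. A real version would replace both layers with trained networks and a genome-scale feature count.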

ETA 15-20 years.

Edit, 1-22-10: Based on the available information, I think gene expression is a strongly representative abstraction level from which to draw. However, I see strong arguments for also including the metabolome and metagenome if it's feasible to do so.

Edit, 6-22-10: My ETA may even be too conservative: a collection of researchers from various California universities recently published a method for using gene expression for diagnosis by associating arbitrary gene expression profiles with clustered sets of expression profiles with known diagnoses.

Edit, 6-12-11: This idea depends on the mid-term availability of incredibly cheap gene expression sequencing. I don't think this is unrealistic, given these sorts of trends (courtesy of genome.gov).

Edit, 6-29-11: Gene expression includes an incredible amount of context and nuance, which provides it with a significant advantage over the current (very imperfect) practice of using simple biomarkers.

11.12.2009

Neuro musings, part 1: neurobiology, psychology, and the missing link(s)

I'm flying out to Salt Lake City tomorrow for a month of thinking about neuroscience; I process ideas by writing, so I'm kicking off an open-ended series of pieces dealing with the stuff I'm thinking about.

Part 1: Neurobiology, psychology, and the missing link(s)
Part 2: Gene Expression as a comprehensive diagnostic platform
Part 3: Neural resonance + neuroacoustics
Part 4: Location, location, location!

The central problem of neuroscience is that despite all the advancements happening in medical science, we have embarrassingly few ways to quantify, or talk quantitatively about, mid-level functional differences between people's brains.

It's not that we have no tools at all for quantifying function and individual differences: we can draw correlations between specific genes and certain behavioral traits or neurophysiological features. We have the DSM IV (and soon, DSM V) as a sort of handbook on the symptoms of common brain-related problems. We have the Myers-Briggs and related personality-typing tests, we have psychometric tests, we have various scans that pick up gross neuroanatomy (and we can sometimes correlate this with behavioral deficits), and we have the fMRI, which can measure raw neural activity through the proxy of where blood flows in the brain.

The problem is that these methods of understanding brains are heavily clustered at two opposite poles. On one side sits the reductionist neuroanatomical approach, which is great as far as it goes, but doesn't go far enough up the ladder of abstraction to explain much about everyday behavior; on the other, the symptom-centric psychological approach, which may be a great description of how various people behave, or of some common neural equilibria, but really explains very little.[1][2] There's a great deal of room in neuroscience for an ontology with which to talk about this underserved middle level of brain function, and for mid-level tools which attempt to measure and correlate things with it.[3]

Of course, the natural question regarding these mid-level approaches to understanding the brain is whether we can find ontologies and tools which "carve reality at its joints"-- that is, which aren't based on a terribly leaky level of abstraction (a test the DSM IV fails), yet have direct relevance to psychological events as we experience them in ourselves and in others (a test the DSM IV passes). I don't have any answers! But I do have ideas.

[1] To paraphrase Sir Karl Popper, implicit in any true explanation of a phenomenon is a prediction, and implicit in any prediction about a phenomenon is an explanation. So a good way to figure out how much of a field is true scientific explanation vs. 'mere stamp-collecting' is to check how much it deals with predictions, whether explicit or implicit. Psychology seems to be a primarily descriptive field that's attempting to translate its rich (yet predictively shallow) descriptive ontology into a more prediction-based science.


[2] This point on the fuzziness of psychiatry was made rather eloquently in an Op-Ed by Simon Baron-Cohen (the famous autism researcher, and first cousin of British comedian Sacha Baron Cohen) in this week's New York Times:
This history reminds us that psychiatric diagnoses are not set in stone. They are “manmade,” and different generations of doctors sit around the committee table and change how we think about “mental disorders.”

This in turn reminds us to set aside any assumption that the diagnostic manual is a taxonomic system. Maybe one day it will achieve this scientific value, but a classification system that can be changed so freely and so frequently can’t be close to following Plato’s recommendation of “carving nature at its joints.”

Part of the reason the diagnostic manual can move the boundaries and add or remove “mental disorders” so easily is that it focuses on surface appearances or behavior (symptoms) and is silent about causes. Symptoms can be arranged into groups in many ways, and there is no single right way to cluster them. Psychiatry is not at the stage of other branches of medicine, where a diagnostic category depends on a known biological mechanism. An example of where this does occur is Down syndrome, where surface appearances are irrelevant. Instead the cause — an extra copy of Chromosome 21 — is the sole determinant to obtain a diagnosis. Psychiatry, in contrast, does not yet have any diagnostic blood tests with which to reveal a biological mechanism.

[3] I realize this is somewhat vague. I plan to expand this description of what I think of as "mid-level functional attributes" and the sorts of concepts and tools I think may be useful for dealing with them. One example of a mid-level measurement that struck me as promising was work correlating lack of microstructural integrity in the uncinate fasciculus with psychopathy.

Edit, 6-29-10: Recent work from UMN and Yale has correlated brain region size differentials to Big 5 personality traits with some success.

11.11.2009

HIC SVNT DRACONES

The new site redesign is now live! Thanks to some beautiful artwork by my friend Corby, and some ugly html hacks by me, Modern Dragons now features a dragon. Speaking of which, it's high time to answer the question:

What's a Modern Dragon anyway?

Back in the Middle Ages, cartographers used to (anecdotally, at least) mark unknown or dangerous territories on their maps with the Latin phrase, HIC SVNT DRACONES-- literally, "Here be Dragons". By metaphor, then, the purpose of this blog is to locate, explore, and perhaps take a swing at the analogous dragons in our modern age-- the puzzles, frontiers, and dangerous elements within science, culture, and this terribly uncertain future of ours.