It’s the comparative silence between neurons’ firing spikes that reveals what they are really up to, scientists report.
“The brain appears to use these durations of silence to encrypt information,” says Dr. Joe Z. Tsien, a neuroscientist at the Medical College of Georgia at Augusta University, of his new Neural Self-Information Theory.
It’s widely held that neurons generate perceptions, thoughts and actions by emitting electrical pulses called action potentials, or spikes. One problem with that standard measure of neuron activity is that neurons are essentially always firing at some level, with spontaneous fluctuation, even when it’s not clear what is happening as a result, says Tsien, Georgia Research Alliance Eminent Scholar in Cognitive and Systems Neurobiology and a corresponding author of the study in the journal Cerebral Cortex.
He uses the analogy of an ocean surface that may look calm compared to a tsunami, but is never truly still. Many scientists have noted that there can also be variation in how even the same neuron responds to the same stimulus, or even to a quiet, resting state. Yet there must be some kind of operating principle that enables us to think and act in real time in the face of this ongoing variability, he says. Brain scientists call the decades-old puzzle cracking the neural code.
Tsien’s team has evidence from monitoring mouse neurons during various activities that the magic happens when a group of neurons each enters a state that is atypical for them – not of firing, but of the relative periods of silence between firings – and enters that state at the same time.
These silent spaces between overt firing are called interspike intervals, and the neurons that have atypical intervals at the same time are part of a clique generating perceptions, actions and thoughts in real time, he theorizes.
“These cells belong to the same group, an assembly,” Tsien says. “It’s a very general finding about how neuron activity codes information.”
Applying this new Neural Self-Information Theory, they have identified 15 groups of cell assemblies in the cortex and hippocampus of the brain that work together to enable things like sleep cycles, sensing where you are and how you act in response to things you see and experience.
For example, they studied mice playing a game in which a light shines on a wall and the mouse learns that if he pokes his nose into the hole at that same spot, rather than one of four other choices, he will get a food pellet when he returns to where he started. If he doesn’t come back in time or pokes the wrong hole, no food pellet awaits. “It’s a simple task but highly attention driven, and how the brain executes this task was poorly understood,” Tsien says.
“To identify the cell cliques that help the mouse be successful, you have to find out what each neuron’s interspike intervals look like when they are outside their normal range of occurrence,” Tsien says. “Among all the cells you record, you then identify the ones that move into that different state – called a surprisal state – at the same time.” This time he uses the analogy of a normally chatty individual in an uncharacteristic period of silence.
“That is when these cells start to act as a clique,” he says. “That is when the neural cliques are coming together to encode a train of thought or a set of actions. If it’s what happens usually, that means it does not carry much information, it’s like a ground state,” Tsien adds.
Tsien’s team reported evidence for his Theory of Connectivity, about the basic wiring logic of the brain, in 2016 in the journal Frontiers in Systems Neuroscience. That theory describes how a group of cells, called a neural clique, comes together to form simple and more complex thoughts to deal with whatever situation life hands us. These cell groups, in turn, form functional connectivity motifs to handle the more complex situations. He used the example then that making the important choice between food and a mate would require three cliques: one that represents food, another that represents thoughts of a mate and a third that combines both and recognizes they are both important.
He postulates that these neuronal relationships are long-term and are one way our brain makes maximum use of the neurons we have.
His clique theory is about how the brain is wired, but the new work is more about which neurons form the cliques and how you recognize them, he says. “To better understand the brain, we are examining both sides of the coin: its hardware designs and its running codes,” he says.
The immediate goal of the studies is to better understand how our brain functions, Tsien says. Longer term, it may enable us to identify areas of the brain we want to silence, to activate or even to recreate, he says.
The brain’s operating principles may also be applied to neuromorphic computing – when the computer’s circuits mimic biological ones – and artificial intelligence systems, he says.
Dr. Edgar Adrian, an English electrophysiologist, discovered in 1926 that a frog nerve fired more as he put increasing pressure or tension on its connecting muscle. He concluded that changes in a nerve cell’s rate of firing, or the number of spikes in a given period of time, are the basis of how the brain conveys or codes information. The work earned him a Nobel Prize in 1932.