Computation at the critical point
How brains harness chaos to drive adaptive and intelligent behavior
Biological brains have more in common with nuclear reactors, avalanches, and forest fires than they do with artificial neural networks. They don’t rely on floating-point numbers or differentiable operations, and they don’t compute gradients. In fact, they don’t even have a loss function to optimize. A biological neuron is more like a particle interacting with others in its neighborhood, unaware of the bigger picture. So, what keeps everything in check?
Consider a nuclear reactor. It operates by balancing the number of free neutrons. When a neutron collides with a uranium atom, the atom splits, releasing energy and more neutrons. If each fission triggers, on average, exactly one further fission, the reactor remains stable. If it triggers more than one, the reaction grows exponentially, leading to a meltdown. If fewer than one, the reaction dies down and the reactor shuts off. This balance point, where activity neither dies out nor spirals out of control, is called the critical point.
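All of these systems share the same underlying mathematics: a branching process. Here is a minimal Python sketch (my own toy model, not taken from any of these domains) where each active unit spawns on average `sigma` offspring, and `sigma` is the knob that separates die-out, stability, and explosion:

```python
import random

def branching_process(sigma, n_steps=50, seed=0):
    """Each active unit spawns 0, 1, or 2 offspring,
    averaging `sigma` offspring per unit (the branching ratio)."""
    rng = random.Random(seed)
    active = 100                     # initial neutrons / neurons / sparks
    history = [active]
    for _ in range(n_steps):
        active = sum((rng.random() < sigma / 2) + (rng.random() < sigma / 2)
                     for _ in range(active))
        history.append(active)
        if active == 0:              # the reaction has died out
            break
    return history

for sigma in (0.9, 1.0, 1.1):        # subcritical, critical, supercritical
    print(f"sigma={sigma}: final count {branching_process(sigma)[-1]}")
```

At `sigma = 1.0`, the count wanders but neither explodes nor reliably dies; that knife's edge is the critical point.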
The same concept applies to avalanches. Before an avalanche, the weight of the snow and the friction holding it in place are balanced. A small disturbance might cause a minor slide, but if the mountain is near the critical point, that small nudge could trigger a massive release of energy. The critical point is where the system teeters between stability and collapse, and a slight push can drive it into a completely different state.
Similarly, in a dry forest, a spark might burn a few leaves or start a wildfire, depending on the balance between dry vegetation and humidity. If the forest is at a critical point, a small trigger can lead to dramatically different outcomes, from a tiny fire to a raging inferno.
Neural activity works in much the same way. Neurons naturally organize their firing so that activity stays roughly balanced. If too many neurons fire at once, it can trigger a runaway chain reaction, like during an epileptic seizure. On the other hand, if activity dies out too quickly, information gets lost, as seen in conditions like Alzheimer's. Maintaining this critical balance is crucial for stable brain function and efficient information processing.
Information processing is optimal at the critical point
The implications of the balanced nature of neuronal activity are profound. At the critical point, several key information processing capabilities are maximized, which enable the emergence of intelligent behavior. Let's expand on a few of them:
Information Transmission
Here’s a thought experiment. Imagine a long section of neural tissue full of interconnected neurons. On one end, you have 100 neurons that you can stimulate, and on the other, 100 neurons you can monitor. You randomly stimulate 50 input neurons, and they fire. Below the critical point, the activity dies out as it travels, so you may read out fewer than 50 active neurons at the other end, maybe none. Above the critical point, the activity gets amplified, and you might read more than 50 active neurons, possibly all of them.
In both cases, you lose information about how many neurons you activated. The closer the system is to the critical point, the better it transmits that information over longer distances, maintaining the fidelity of the input stimulus.
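Here’s a rough simulation of the thought experiment. The branching model of transmission is my own simplification (real tissue adds recurrence and saturation), but it shows how the read-out depends on `sigma`:

```python
import random

def readout(n_stimulated, sigma, n_layers=20, seed=None):
    """Send a volley of spikes through `n_layers` of tissue. Each
    spike triggers 0, 1, or 2 spikes in the next layer (mean
    `sigma`). The monitored end holds 100 neurons, so the read-out
    saturates at 100."""
    rng = random.Random(seed)
    spikes = n_stimulated
    for _ in range(n_layers):
        spikes = sum((rng.random() < sigma / 2) + (rng.random() < sigma / 2)
                     for _ in range(spikes))
    return min(spikes, 100)

for sigma in (0.8, 1.0, 1.2):
    trials = [readout(50, sigma, seed=t) for t in range(5)]
    print(f"sigma={sigma}: stimulated 50 -> read out {trials}")
```

Near `sigma = 1` the read-out still carries information about the input; in the other regimes it collapses to "almost none" or "all of them".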
Information Storage
Now, take the same long section of neural tissue and connect the ends, forming a loop. After stimulating a set of neurons, the wave of activations will circle around. Just like with transmission, the wave best preserves its density at the critical point. It doesn’t die out or explode; it just keeps circulating. You can send multiple waves, and they’ll stick around for a while, maintaining both their density and the timing between them. That’s a good candidate for working memory.
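To see why a loop behaves like a memory, here is a deliberately minimal model (mine, not from the literature): packets of spikes hop around a ring, and each spike survives a hop with probability `sigma`:

```python
import random

def ring_memory(sigma, n_nodes=100, n_steps=300, seed=3):
    """Inject two packets of spikes into a loop of nodes. Each step,
    all spikes hop to the next node, and each spike survives the hop
    with probability `sigma` (so this toy only covers sigma <= 1)."""
    rng = random.Random(seed)
    spikes = [0] * n_nodes
    spikes[0], spikes[10] = 30, 30             # two waves, 10 nodes apart
    for _ in range(n_steps):
        nxt = [0] * n_nodes
        for i, n in enumerate(spikes):
            nxt[(i + 1) % n_nodes] = sum(rng.random() < sigma for _ in range(n))
        spikes = nxt
    return [i for i, n in enumerate(spikes) if n]  # where the waves are now

print(ring_memory(1.0))   # [0, 10]: both waves intact, timing preserved
print(ring_memory(0.9))   # []: the waves have died out
```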
I must admit, this is a very simplified picture. Real tissue is not wired in a loop; it forms a complex mesh of recurrent connections. Still, that mesh exhibits the same persistence of signal density and timing.
Susceptibility and Dynamic Range
Susceptibility refers to how responsive the system is to small variations in its inputs. In neural networks, this means that a slight change in input can lead to a disproportionately large change in activity. At the critical point, the network is most susceptible, making small perturbations easily detectable.
Dynamic range refers to the system’s ability to respond to both very weak and very strong inputs, capturing a wide spectrum of input magnitudes. In the context of a neural network, consider two extremes:
- If the system is below the critical point, it has a narrow dynamic range, meaning weak inputs barely produce a response, and strong inputs don’t fully propagate either because activity quickly fades.
- If the system is above the critical point, the dynamic range is also limited because almost any input, no matter how weak, triggers a strong response that spreads uncontrollably.
At the **critical point**, the network is more flexible. It can respond proportionately to weak inputs, generating small cascades of activation, while also responding appropriately to strong inputs by producing large cascades.
Both factors make it possible for us to perceive differences across many scales, like hearing the difference between 100 Hz and 200 Hz, or 10,100 Hz and 10,200 Hz.
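As a sketch of how one might measure this, here is a toy recurrent network driven by external input of rate `h`, loosely inspired by classic stimulus-response studies of excitable networks. All parameters are invented for illustration:

```python
import random

def response(h, sigma, n=1000, k=10, t_steps=200, seed=4):
    """Steady-state firing rate of a recurrent network under an
    external drive of rate `h`. Each spike tries to recruit `k`
    random neighbors with probability sigma / k each; a neuron
    that just fired is refractory for one step."""
    rng = random.Random(seed)
    neighbors = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    firing, total = set(), 0
    for t in range(t_steps):
        nxt = set()
        for i in range(n):                      # external drive
            if i not in firing and rng.random() < h:
                nxt.add(i)
        for i in firing:                        # recurrent spread
            for j in neighbors[i]:
                if j not in firing and rng.random() < sigma / k:
                    nxt.add(j)
        firing = nxt
        if t >= t_steps // 2:                   # skip the transient
            total += len(firing)
    return total / (t_steps - t_steps // 2) / n

for sigma in (0.8, 1.0, 1.2):
    rates = [response(h, sigma) for h in (1e-4, 1e-3, 1e-2, 1e-1)]
    print(f"sigma={sigma}: " + "  ".join(f"{r:.4f}" for r in rates))
```

Subcritical, the weakest inputs are nearly invisible; supercritical, every input produces roughly the same saturated response; near the critical point, the whole range of inputs maps onto distinguishable output rates.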
Scale-Free Behavior
At the critical point, the neural cascades also exhibit scale-free behavior. In scale-free systems, there’s no dominant event size. Small, medium, and large events are all possible, and their frequency follows a power law. For a neural network, this means:
- A single neuron firing could trigger a tiny response involving just a few neurons or a massive cascade affecting large parts of the network.
- No set pattern of propagation dominates. The system is highly adaptable, responding with cascades of varying sizes depending on how it’s triggered.
Because the system can handle anything from localized to widespread activity, it can process information at many different levels. A small input might stay localized, or it could grow to involve larger areas of the brain, depending on the state of the system.
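We can check this signature in the same toy branching model used earlier. At criticality, avalanche sizes should follow the well-known power law P(S) ∝ S^(-3/2), which means the tail P(S ≥ s) falls off as s^(-1/2):

```python
import random

def avalanche_size(sigma, rng, cap=100_000):
    """Total number of activations triggered by a single seed spike
    in a branching process with branching ratio `sigma`."""
    size, active = 0, 1
    while active and size < cap:
        size += active
        # each active neuron recruits 0, 1, or 2 others (mean = sigma)
        active = sum((rng.random() < sigma / 2) + (rng.random() < sigma / 2)
                     for _ in range(active))
    return size

rng = random.Random(5)
sizes = [avalanche_size(1.0, rng) for _ in range(20_000)]
# At criticality P(S >= s) ~ s^(-1/2), so frac * sqrt(s) stays ~constant:
for s in (4, 16, 64, 256):
    frac = sum(x >= s for x in sizes) / len(sizes)
    print(f"P(S >= {s:3d}) = {frac:.3f}   frac * sqrt(s) = {frac * s**0.5:.2f}")
```

The product staying roughly constant is the power-law fingerprint: no characteristic avalanche size, just a smooth continuum from tiny to huge.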
For example, when you hear a faint sound, only a few neurons may fire, and the response remains localized in a small area of your brain. But when you hear a louder or more complex sound, like music, the neural activity may spread, involving multiple regions, from auditory processing centers to memory areas, depending on the input's complexity. Importantly, the brain doesn’t need to switch to a different "mode" of operation to handle these variations. Whether processing a subtle sound or a rich musical composition, the brain can seamlessly recruit the appropriate number of neurons, adjusting dynamically to the demands of the task.
How does the brain self-regulate into a critical state?
Achieving and maintaining a critical state in the brain is far from a spontaneous process. The critical point is so delicate that the brain needs active regulation to stay near it; in fact, rather than being perfectly critical, the brain operates in a quasicritical state, hovering close to, but never exactly at, the critical point. The processes participating in this regulation, ordered from faster to slower timescales, are:
- **Firing rate homeostasis** ensures that a neuron returns to its average firing rate. If a neuron fires too often, it becomes less excitable, and it takes time to rebuild its firing potential. If its firing rate is too low, its firing potential increases, making it more likely to respond to stimuli. (A toy sketch of this mechanism follows the list.)
- **Synaptic scaling** adjusts the strength of all synapses on a neuron proportionally. Their relative weights don't change; only the excitability of the neuron does. If the neuron becomes too active, the strengths of all its synapses are scaled down, reducing the impact of incoming signals. If the neuron is not active enough, the strengths of its synapses are scaled up, making it easier for inputs to activate the neuron.
- **Hebbian-like synaptic plasticity** makes individual synapses stronger or weaker depending on whether the neurons on both ends fire simultaneously, indicating a shared interest. In this case, the relative weights between synapses do change, but not necessarily the excitability. This mechanism also underlies learning and long-term memory formation.
- **Sprouting and pruning of connections** changes the connectivity of the graph. A higher number of branching connections facilitates the proliferation of activity, whereas a lower number impairs transmission. Like Hebbian adaptation, this also promotes the learning of activity patterns.
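Here is the toy sketch of the fastest of these mechanisms, promised above. The numbers (`target` rate, background `drive`, learning rate `eta`) are invented; the point is the negative feedback: activity above target lowers excitability, activity below target raises it:

```python
import random

def homeostasis_demo(target=0.05, drive=0.001, eta=0.01,
                     t_steps=3000, seed=6):
    """Toy firing-rate homeostasis: the population rate follows the
    branching ratio `sigma`; neurons lower their excitability when
    too active and raise it when too quiet."""
    rng = random.Random(seed)
    sigma = 1.8                          # start strongly supercritical
    rate = target
    for t in range(t_steps):
        # one propagation step plus a trickle of spontaneous spikes
        rate = min(1.0, max(0.0, rng.gauss(rate * sigma + drive, 0.005)))
        # homeostasis: too active -> less excitable, and vice versa
        sigma = max(0.0, sigma - eta * (rate - target))
        if t % 500 == 0:
            print(f"t={t:4d}  rate={rate:.3f}  sigma={sigma:.3f}")

homeostasis_demo()  # sigma falls from 1.8 and hovers just below 1
```

Notice that it settles slightly below `sigma = 1` rather than exactly at it, which is one way to picture the quasicritical regime mentioned above.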
It’s a developmental process
A newborn brain is densely connected, but the journey to adulthood is marked by the pruning of these connections. In fact, research shows that a higher rate of pruning during adolescence is linked to greater intellectual ability [1]. Learning can be thought of as a process of eliminating irrelevant details, leaving only the core principles. This pruning may be an adaptation strategy of the overly connected brain to achieve stable criticality, a process that unfolds over months and years.
Sleep deprivation has been shown to push the brain into a supercritical state, leading to cognitive impairment and increasing the risk of strokes in extreme cases [2]. I propose that young brains, due to their higher connectivity, are naturally prone to a supercritical regime, which may explain why infants and children need more sleep to maintain stability. A baby's crying, for example, might be a reflection of activation avalanches caused by this slightly supercritical state.
It’s an active regulation process too
External stimuli can also disrupt the brain's critical state. A loud sound in a quiet environment can overwhelm the system, and it takes some time for the brain to adapt. Some cars, for instance, trigger a loud sound milliseconds before a crash to help the brain brace for the bang of the airbags. Similar responses occur in other sensory contexts, such as touching hot water with cold hands or turning on a light in a dark room. These stimuli temporarily tip the system out of balance, and the brain works to restore criticality.
More significant disruptions, like brain surgery, can take the brain days or weeks to recover from. Researchers regularly observe lab cultures of neural tissue growing and self-organizing into stable firing patterns over a period of a few weeks. This self-organization is a fundamental drive encoded in the DNA of nervous system cells.
In summary, achieving and maintaining criticality is not automatic. The brain must continuously reorganize itself, a process that can take years of rewiring and fine-tuning. This delicate balance is crucial for efficient information processing and adaptability.
The bigger picture
Criticality offers vast potential for computation. We've already discussed how it optimizes key information processing properties, but there's more to explore at higher levels of organization.
Regulating attention states
The brain can regulate the criticality of different areas, fine-tuning them for various tasks. For example, when a region is moved farther from criticality, its ability to process information weakens, effectively silencing it. This allows other, more relevant areas to operate with less interference, increasing attention. Have you ever struggled to remember a person’s name, only to have it pop into your mind effortlessly during a shower? This happens because high-attention situations prioritize the brain regions deemed relevant, reducing cognitive load and increasing focus. The name mishap is a quirk of this strategy, showing how the brain dynamically shifts its resources.
Learning to model the world
A larger question emerges from this discussion: does criticality alone drive intelligent behavior, or is it just a low-level detail, overshadowed by more abstract processes? I argue that criticality could in fact be a central objective driving the whole system towards agency. Let’s dive into a thought experiment to explore this idea.
Imagine a densely interconnected region of the cerebral cortex receiving external inputs. To maintain criticality, the system must learn to manage incoming signals that it can't directly control, or they may cause avalanches of activity. One option would be to ignore these signals by shutting off the receiving neurons so they don't spread them. But because the receiving neurons are also connected to internal neurons and participate in the internal dynamics, disabling them would also disrupt internal communication, pushing the system into a subcritical state.
A better option is to account for these inputs by momentarily muting local activity. If inputs follow predictable patterns, the network can adapt so that certain inputs trigger inhibitory neurons, blocking unrelated neurons. This creates a narrow pathway of critical neurons ready to transmit the expected signal, while others remain subcritical. If the expected signal arrives, it travels down the pathway, triggering new inhibitory neurons that continue steering the next expected signals. But if the signal is unexpected, it dies out in subcritical areas and the critical pathway does not get the anticipated spike of activity. As activity decreases, the inhibitory neurons lose their influence, and the critical pathway expands, making the system more receptive to new patterns until it locks onto the correct prediction again. When the pathway expands, the brain becomes more sensitive to unexpected inputs. This translates to potentially larger avalanches, which we experience as surprise.
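Here is a cartoon of that mechanism, with everything made up except the logic: a lookup table stands in for the learned inhibitory wiring, and cascade sizes stand in for how much of the network an input recruits:

```python
import random

def surprise_demo(sequence, seed=7):
    """A lookup table learns which input follows which. A correctly
    predicted input travels a narrow, pre-inhibited pathway (small
    cascade); an unpredicted one hits an expanded, receptive network
    and triggers a large cascade. Cascade sizes are made-up numbers."""
    rng = random.Random(seed)
    prediction = {}                    # last input -> expected next input
    prev = None
    for x in sequence:
        expected = prediction.get(prev)
        if expected == x:
            cascade = rng.randint(5, 15)       # gated: small, local response
        else:
            cascade = rng.randint(50, 150)     # ungated: surprise
        print(f"input={x}  expected={expected}  cascade size={cascade}")
        prediction[prev] = x                   # learn the transition
        prev = x

surprise_demo(list("ABCABCABD"))   # the final 'D' breaks the learned pattern
```

The first few inputs are surprising because nothing has been learned yet; once the pattern is locked in, only the violation at the end produces a large cascade.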
Learning to act on the world to improve predictions
This process, in which neurons form internal models by predicting their inputs, shows how the brain creates a predictive framework. Inhibition acts as an internal prediction mechanism, but outputs can help too. For example, imagine this cortical region specializes in identifying apples, and it's also wired to the muscles controlling eye movement. As we look at an apple, a critical pathway forms that predicts apple-related inputs. Meanwhile, as other brain areas carry out their tasks, the body moves, causing the apple to leave our field of vision. This creates a potential surprise. The network, however, can leverage its output to steer the eyes back to the apple, preserving its predictive accuracy and lowering the potential for surprise. And as mentioned above, a higher brain process could modulate this entire area to focus attention on apples or to ignore them.
This is a simplified example, but it illustrates how agency can emerge from a system striving to maintain criticality while engaging in a two-way feedback loop with its environment. We've also touched on the concept of surprise, which plays a key role in predictive coding and active inference, frameworks increasingly used to explain agency and intelligent behavior. I’ll be writing more about these in upcoming posts, so stay tuned!
Sources
Just a heads-up: links to Amazon are affiliate links. If you decide to make a purchase through them, I’ll earn a small commission at no extra cost to you.
This post was largely based on the great book The Cortex and the Critical Point, by John M. Beggs. It's a dense but highly revealing introduction to the concepts and research on this topic.
1. P. Shaw et al., “Intellectual ability and cortical development in children and adolescents,” Nature, doi: 10.1038/nature04513.
2. C. Meisel et al., “Fading Signatures of Critical Brain Dynamics during Sustained Wakefulness in Humans,” J. Neurosci., doi: 10.1523/JNEUROSCI.1516-13.2013.