This post is part of an ongoing online book. To access the other parts, please refer to the contents page of the book.

The brain is unbelievably complex. Walking in a park, having a conversation with your friends, understanding what you’re reading or hearing, being aware of others’ feelings: these are enormously complex processes. They might seem trivial to us, but when looking at them from a machine’s perspective, we can see how impossibly difficult they really are. All of that is somehow governed by a huge cluster of tiny cells in our heads, also known as neurons. A neuron in itself does not “know” anything; it is just a biological machine. Yet somehow a bunch of neurons together form a consciousness. This is one of the greatest unsolved mysteries of our time.

It seems like the mind is an emergent property of the brain. “Emergence” is when a system has properties that its parts do not have, properties that arise from the rules and interactions between those parts. Let’s take an ant colony as an example. An ant is pretty dumb. Its decision-making abilities are limited, and individually it can’t plan anything ahead. Yet an ant colony is enormously complex and smart. Ants in a colony have different “jobs”, perform various collective tasks, and live in self-made structures that are way too complex for the mental capability of a single ant. One could arguably call an ant colony an organism in itself. It can make complex decisions as if it were a single entity, yet under the hood it’s all governed by a decentralised system of tiny, almost brainless ants. Our brains are not much different in that regard. The brain is a collection of dumb machines that somehow together do very smart things.

Parts in an emergent system often follow very simple rules. When those parts interact with one another, complexity can arise. During the evolution of the nervous system, many mental algorithms emerged, each from simple neuronal interactions, and each giving organisms a fascinating ability. The ability to form memories, to imagine, to infer the actions of another organism: all these abilities are derived from emergent algorithms that “run on top” of neural circuits. When talking about “algorithms”, we should not imagine software running on a computer. Even a vending machine can be viewed as an algorithm. You put coins into the machine, select an item, and it will perform a specific sequence of steps. Procedures, that is all algorithms are. In the case of the mind, the neural circuits in our head are what enable such procedures to emerge.
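To make the vending-machine view of an algorithm concrete, here is a minimal sketch. The machine, its prices, and all the names are of course invented for illustration; the point is only that an algorithm is a fixed procedure mapping inputs to a sequence of steps:

```python
# A vending machine viewed as an algorithm: a fixed procedure that maps
# inputs (coins, a selection) to a sequence of steps. All names and
# prices here are illustrative.

class VendingMachine:
    def __init__(self, prices):
        self.prices = prices   # item -> price in cents
        self.credit = 0        # coins inserted so far

    def insert_coin(self, cents):
        self.credit += cents

    def select(self, item):
        """Run the fixed procedure: check credit, dispense, return change."""
        price = self.prices[item]
        if self.credit < price:
            return None, 0             # not enough money: do nothing
        change = self.credit - price
        self.credit = 0
        return item, change            # dispense the item plus change

machine = VendingMachine({"water": 150, "soda": 200})
machine.insert_coin(100)
machine.insert_coin(100)
item, change = machine.select("water")
print(item, change)  # water 50
```

No processor, no software in sight, and yet this is a full-blown algorithm: given the same inputs, the same steps always unfold.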

Memory, induction, the ability to detect movement, the ability to detect the passage of time: these are all emergent algorithms. Even more complex processes, such as deduction, creativity, or imagination, fall into the same realm; all these processes somehow emerge from neural circuits. Changes in these neural circuits or neural structures can result in changes to an algorithm, or in some cases even create a new algorithm. If a useful algorithm appears due to a mutation in one of the neural circuits, that organism will have a competitive advantage over other organisms. Evolutionary forces can then shape the circuits and the neuronal structures over time, in a way that makes the emergent algorithm more efficient.

This is precisely why I argue that it is crucial to understand the biology of the brain. If an emergent algorithm significantly helps an organism survive, and as a result enables the organism to have on average more offspring, then any organism with more optimised circuitry for that algorithm will have a bigger advantage over other organisms. This might sound overcomplicated at first, but what it means is that over a long time, evolutionary forces will shape the underlying neural circuitry in a way that optimises for the emergent algorithms. It follows that it should be possible to reverse engineer the emergent algorithms by looking at the optimised physical structures and biochemical processes of neural circuits.

Always remember: in biology, form follows function. If there is a specific pattern in nature that repeats over and over again in different species, we can be very confident that the repeating pattern has an important role. By looking at the optimised morphology, structures, and processes of different neurons, we can start to guess their purpose in the context of the mind. Notice how I am not saying “in the context of learning”, but “in the context of the mind”. The ability to learn, i.e. the ability to correctly infer future events based on past experiences, is only one emergent phenomenon in a set of phenomena which I refer to as “the mind”. To decipher how the mind works, I argue that we have to treat it as a set of emergent algorithms. The ability to learn, deduce, imagine, memorise: all these processes are emergent algorithms that are part of the mind. I want to emphasise one thing however: I am not claiming that we can understand all emergent phenomena of the brain by looking at individual neurons. Even if we know a neuron inside out, it doesn’t tell us much about how a group of neurons interact with one another. A reductionist mindset will not bring us far when trying to decipher the mind. By treating the brain instead as a complex system with several emergent properties, we can start to guess how its different parts interconnect with one another.

In this book, I am particularly interested in examining the algorithm that gives rise to memory. When I say “memory”, I do not only mean the ability to recall past events. What I mean by it is the ability to store external, or behavioural, information in neurons and neural circuits. Neuroscientists often use different terms for memory, such as explicit memory or implicit memory, depending on the kind of memory in question. In this book, any information that gets stored in neurons or neural circuits will simply be referred to as “memory”. Do not be deceived by the term however; memory in the context of the brain is very different from the memory we use in computers. Today’s computers are based on something known as the von Neumann architecture, where we have a CPU (the basic processing unit) and a separate memory. In this setup, for any kind of processing, data has to be moved from the memory unit to the processing unit. This is time consuming and quite energy inefficient. The setup for processing and memory in the brain, on the other hand, is much more sophisticated. Computing and memory are not distinctly separated in the brain; they go hand in hand, somehow co-located in the neural circuits. It is truly mind-blowing when you think about it: somehow the mesh of neurons located in our skull is able to rapidly store external information and also retrieve it shockingly fast and accurately. To this day, we have no idea how the brain does it, but we are getting closer! We know for example that a memory tends to be associated with a specific activation of neurons. Each of our experiences and behaviours somehow gets associated with a neural pattern.
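The von Neumann bottleneck is easy to see in a toy fetch-decode-execute loop. The instruction format below is invented for illustration; what matters is that the CPU cannot operate on data in place, so even a single addition requires shuttling values between memory and processor:

```python
# A toy fetch-decode-execute loop illustrating the von Neumann setup:
# every value must be moved between memory and the CPU before and after
# any computation. The three-instruction format is invented.

memory = {"a": 2, "b": 3, "result": 0}
program = [("LOAD", "a"), ("ADD", "b"), ("STORE", "result")]

accumulator = 0   # the CPU's single working register
transfers = 0     # count of memory <-> CPU moves

for op, addr in program:
    if op == "LOAD":
        accumulator = memory[addr]; transfers += 1   # fetch operand
    elif op == "ADD":
        accumulator += memory[addr]; transfers += 1  # fetch second operand
    elif op == "STORE":
        memory[addr] = accumulator; transfers += 1   # write result back

print(memory["result"], transfers)  # 5 3: one addition, three transfers
```

In the brain, no such separation seems to exist: the same circuits that store the information also take part in processing it.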

When you smell a flower, for example, a specific pattern of neurons in your brain will fire together, giving you that specific sensation. Once that pattern appears, a cascade of other patterns can emerge. The odour of the flower might trigger a specific memory. You might remember a meadow that you once visited as a child. An emotion might emerge in the cascade of patterns, causing you to feel nostalgia. All this because of a single neural pattern that was caused by the odour of a flower. How was that pattern formed? The question is not only about the physical substrate of memory (which we will examine in much detail), but also about the coordination of read and write operations. How does the brain know how to form patterns, and how does it know which patterns to activate in which scenarios, and at which time? Unlike computers, the real brain is highly asynchronous.

“Asynchrony” is when events in a system do not occur in a sequential order, but rather during overlapping time periods. We say that something runs “concurrently” when events or computations can advance without waiting for all other events or computations to complete. In the case of the brain, the bursts of firing neurons represent the concurrent events. How can brain activities converge into specific neural patterns when the whole system runs concurrently? If a neuron fires just a bit earlier, or a bit later, than the other neurons, the desired pattern might not emerge, which could for example lead to an execution failure of an action down the line. As an example, you might see a ball flying in your direction, fail to process the right action due to bad timing between neurons in your brain, and get hit by it in the face. The fact that such scenarios do not happen all the time makes this all the more intriguing. The problem of asynchronous communication in neural circuits is especially tricky, because none of the neurons “know” when the other neurons will fire, but somehow they still manage to coordinate and act as one. Any computer scientist can tell you how difficult it is to build a working, robust asynchronous system. It is in fact a nightmare. And yet here we are, each one of us carrying around the most complex asynchronous machine in the known universe, and it consumes less energy than a light bulb.
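How sensitive pattern formation is to timing can be sketched with a toy “coincidence detector”: a downstream neuron that only fires when enough input spikes land inside a narrow time window. This is my own simplification, not a biophysical model, and the window and threshold values are invented:

```python
# Toy coincidence detection (an illustrative simplification, not a
# biophysical model): a downstream neuron fires only when `threshold`
# input spikes arrive within a `window_ms`-wide span. A single spike
# that is slightly too early or too late breaks the pattern.

def fires(spike_times_ms, window_ms=5.0, threshold=3):
    """Return True if `threshold` spikes fall inside some window_ms span."""
    times = sorted(spike_times_ms)
    for i in range(len(times)):
        # count spikes within the window starting at times[i]
        count = sum(1 for t in times[i:] if t - times[i] <= window_ms)
        if count >= threshold:
            return True
    return False

print(fires([10.0, 11.2, 12.9]))   # True: all three spikes within 5 ms
print(fires([10.0, 11.2, 19.5]))   # False: the third spike arrives too late
```

With millions of such timing constraints active at once, it becomes clear why coordination in an asynchronous system is so remarkable.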

The problem of neural coordination can be categorised as a Byzantine Generals Problem. The Byzantine Generals Problem was first named by Lamport, Shostak, and Pease in a wonderful 1982 paper, which not surprisingly was titled “The Byzantine Generals Problem”. The abstract idea is simple: imagine that you are a general in the Byzantine army, planning to attack an enemy fortress. The fortress is completely surrounded by several of your battalions, each controlled by a different general. If you all manage to attack the fortress at the same time, you will succeed in taking it down. An uncoordinated attack, on the other hand, will end in defeat. How do you make sure that all battalions attack at the same moment? Even though the Byzantine Generals Problem was originally conceived and formalised as a condition in distributed computing systems, we can see how neural circuits face a very similar problem. Neurons are more likely to fire when they receive their inputs simultaneously. It seems like the right timing is a necessary component for proper pattern formation and activation. As we will see in later chapters, the temporal aspect of the neural coordination problem plays a key role in memory formation.
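The flavour of the 1982 solution can be captured in a toy version of the “oral messages” algorithm with four generals, one of whom is a traitor. The names and message format are heavily simplified for illustration; the essential trick is that every lieutenant relays what it heard and then takes a majority vote:

```python
# A toy sketch of the oral-messages algorithm OM(1) from Lamport, Shostak
# and Pease (1982): one commander, three lieutenants, at most one traitor.
# Names and the two-round message exchange are simplified for illustration.

def om1(commander_order, traitors):
    lieutenants = ["A", "B", "C"]
    # Round 1: the commander sends its order to every lieutenant.
    received = {l: commander_order for l in lieutenants}
    # Round 2: every lieutenant relays what it received to the others;
    # a traitor relays the opposite order.
    decisions = {}
    for l in lieutenants:
        votes = [received[l]]
        for other in lieutenants:
            if other == l:
                continue
            relayed = received[other]
            if other in traitors:
                relayed = "retreat" if relayed == "attack" else "attack"
            votes.append(relayed)
        # Each lieutenant decides by majority vote over all it heard.
        decisions[l] = max(set(votes), key=votes.count)
    return decisions

decisions = om1("attack", traitors={"C"})
print(decisions["A"], decisions["B"])  # attack attack: loyal generals agree
```

Despite the traitor’s lies, the majority vote lets all loyal lieutenants converge on the same order, which is exactly the kind of agreement-under-uncertainty that coordinated neural firing seems to require.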

Besides emerging from completely asynchronous processes, memory has another very interesting property: it is “re-programmable”. Different organisms will have different memories, depending on their experiences. You were not born knowing what an apple is, but once you look at an apple, once you taste one, you will store a mental representation of the apple in your brain. Even more fascinating, next time you see an apple, you will know what it is, even though the apple you saw the first time looked slightly different from the new one. The ability to figure out what something is based on past experiences is also known as inductive reasoning, and it has proved to be a very useful tool in the evolution of animals.

Besides re-programmable memory, living organisms also have hard-coded memories, which evolved out of necessity. Ants for example (and many other social insects) have an interesting imprinted behaviour known as necrophoresis. Whenever there are dead bodies of other colony members in the nest, or on the highways where the ants travel, the ants will carry the corpses and put them in a pile that is far enough from the colony. Ants never learned this kind of behaviour; they were born with it. In other words, this behaviour is pre-programmed in their genetic code; it did not emerge through experience. We can find even more interesting cases of hard-coded behaviours in honeybees. Honeybees communicate the location of nearby flowers with dance. They perform either “waggle dances” or “round dances” inside their beehive, in the presence of other bees. The other bees receive the information from the dance, and then fly to the specific flower location. The precision of the honeybee language is astonishing; what is even more surprising, however, is that it is encoded in their DNA. Different species of bees will have different “dialects” for communicating with one another. Back in 1995, T. E. Rinderer and L. D. Beaman showed how certain dance-dialect differences followed simple Mendelian rules of inheritance.

Even we humans have encoded memories. Mammalian infants, for example, can smell the mother’s breast and instinctively reach for it after birth. Human babies are even able to automatically open their mouth when something is near it, or to start sucking when something is in it. This kind of behaviour was not taught; the information for it was already encoded in the brain. We were born with it. However, only the most crucial behaviours can be encoded genetically. Adopting new behaviours by encoding memories in a population’s gene pool is a very slow and inconvenient process for most tasks. A mobile organism will need to assess more situations than a static one, and because of that, it will need a much larger pool of possible behaviours to pick from. Coding all of that into a genome quickly becomes infeasible. Infants, for example, are not born with the ability to see or to perform complex movements, as both these tasks are so complex that they need to be developed through experience.

The first multicellular mobile organisms thus needed some kind of hardware module, not only with imprinted procedures but with re-programmable memory as well. Mobile organisms needed a way to store new experiences and behaviours on the go. A key word here is “mobile”. A lot of scientists in fact believe that movement is why organisms evolved brains in the first place. In the race for survival, mobile organisms had to evolve the ability for ever more adaptive and complex movements and survival strategies. This only became possible once re-programmable memory evolved. As in previous biological systems, the hardware and algorithms of these organisms were pre-determined by their DNA. The “software” (or the memory) on the other hand was able to change dynamically within an organism’s lifetime. This was a game changer. Suddenly the complexity of the behaviours an organism could perform was no longer bound by the slow genetic instructions it carried around. Organisms were for the first time able to make quick adjustments to their own software.

Amusingly enough, without mobility, the need for a brain also disappears. An interesting example of this is the intriguing life cycle of the sea squirt. At the beginning of its life cycle, this animal lives as a tadpole-like larva with a developed nervous system and the ability to swim. Since the larva is not capable of feeding itself, it will try to find a nice rock and cement itself onto it, where it will live for the rest of its life. Once it settles like this, a fascinating transformation unfolds. The larva starts to digest all its tadpole-like parts, including its rudimentary little brain, to transform into a developed sea squirt. Since it does not move anymore, it does not need a “brain” to live. There is however also another very interesting link between mobility and re-programmable memory.

It might sound strange, but our sensory systems also require re-programmable memory. Let’s take the visual system as an example. For our neurons to be able to make sense of the world, they need to be able to detect edges, shapes, colours and depth from the visual input. That kind of information has to be encoded, i.e. stored somehow by the neurons. That’s where re-programmable memory comes in. Note again that when using the word “memory”, I am referring to the ability to store external, or behavioural, information in neural circuits. Thus, for the development of vision, memory is required. What’s even stranger though, is that the feedback from movement is just as important. As long as an organism is immobile, it will not be able to learn how to see. Richard Held and Alan Hein made this clear in a fascinating 1963 experiment known as the “Kitten Carousel”.

The researchers placed two very young kittens into a carousel that was ringed with vertical stripes. The kittens were still too young to have a developed visual system. The first kitten was able to walk freely inside the carousel under its own power, while the second kitten rode in a gondola inside the carousel, without the ability to control its movement. Both kittens received the same kind of visual stimuli from the attached vertical stripes when moving in the carousel. One would expect the two kittens to develop normal vision, but that was not the case. Only the kitten that was able to walk freely developed normal vision. The kitten that did not move but rode the gondola never managed to learn how to see properly. Its visual system did not undergo any significant changes like that of the first kitten. It seems like vision is more than just receiving input through our eyes. To learn how to see, one requires feedback loops from other signals that an organism generates, such as movement. The end result of all the signals and feedback is the encoded memory that enables the ability of vision.

We are only now starting to understand how the mechanisms for such “re-programmable memory” work. This might sound a bit far-fetched, but we are in fact in a very similar position to where Gregor Mendel, the father of modern genetics, was almost two centuries ago, when he proposed his famous unit of inheritance (today known as the gene). Mendel was the first person to notice predictable patterns in the inheritance of traits. After thousands of experiments, back in 1865, Mendel proposed a hypothetical unit for inheriting traits. He called these heredity units “factors”. Back then, people knew that different traits were inheritable, but very few bothered to ask how traits were inherited. It is important to remember that Mendel had absolutely no idea what genes were, what their physical substrate was, or how they worked. These hereditary units were purely hypothetical, but they explained his experimental observations very well.

Today, a lot of scientists believe in another such hypothetical unit, a unit not for traits, but for memories. We refer to those hypothetical units as engrams, or memory traces. Mendel’s hypothetical unit served as a storage medium for inheritance, while the hypothetical engrams serve as a storage medium for cognitive information. The exact mechanisms and locations of the engrams, however, remain enigmatic. We do know some things at least. Remember how different experiences, such as odours, can activate different patterns of neurons in the brain. We know that assemblies of neurons in our brain are somehow associated with specific experiences. Whenever an experience occurs, the neurons of the associated pattern fire together. Donald Hebb, a famous neuropsychologist, noticed this strange phenomenon and proposed a marvellous theory in his 1949 book “The Organization of Behaviour”. Today Hebb’s proposal is known as Hebbian Theory, or Hebb’s rule, and it explains how assemblies of neurons can become engrams. Hebb stated that:

“Any two cells or systems of cells that are repeatedly active at the same time will tend to become ‘associated’ so that activity in one facilitates activity in the other.”

In layman’s terms, Hebb’s rule states that “neurons that fire together, wire together”. That oversimplified statement is indeed what we observe. If an input to the brain (such as an odour) repeatedly causes the same pattern of neurons to activate together, the neurons of that pattern will become interassociated: the pattern will start to fire as a whole. Many scientists believe that this pattern “auto-association” is the perfect candidate for the engram. There is now overwhelming evidence that ensembles of neurons play an important role in memory storage. A fascinating experiment in 2019, for example, showed how by artificially activating behaviourally relevant ensembles of neurons, one could observe consistent behavioural responses from test animals. In other words, whenever the specific behavioural pattern was artificially activated, the mice performed the behaviour that was associated with the pattern. To achieve the forced pattern activation, the researchers genetically engineered the mice to have neurons covered with special proteins that cause the neuron to fire when hit by a beam of light. This ingenious technique is known as optogenetics. What was even more fascinating, however, was that some ensembles of neurons had pattern-completion properties. By artificially activating just two of an ensemble’s neurons, the researchers were able to trigger the entire ensemble to activate. The results of the experiment beautifully coincide with Hebb’s rule.
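Hebbian auto-association and pattern completion can be sketched with a tiny Hopfield-style network. This is my own choice of formalism for illustration, not the circuitry of the 2019 experiment, and the pattern and network size are invented:

```python
import numpy as np

# A minimal Hebbian auto-associator (a Hopfield-style sketch, chosen for
# illustration): neurons that fire together in a stored pattern get
# strengthened connections; later, activating only part of the pattern
# recalls the whole of it.

def store(patterns):
    """Hebbian learning: strengthen the weight between co-active neurons."""
    n = len(patterns[0])
    w = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p)
        w += np.outer(p, p)    # "fire together, wire together"
    np.fill_diagonal(w, 0)     # no self-connections
    return w

def recall(w, cue, steps=5):
    """Repeatedly update every neuron from its weighted inputs."""
    s = np.asarray(cue, dtype=float)
    for _ in range(steps):
        s = np.sign(w @ s)
    return s

pattern = [1, 1, 1, -1, -1, -1]   # an "ensemble": first three neurons active
w = store([pattern])
cue = [1, 1, 0, 0, 0, 0]          # activate only two neurons of the ensemble
print(recall(w, cue))             # the full stored pattern is completed
```

Activating just two of the ensemble’s neurons pulls the whole stored pattern back into activity, which is exactly the pattern-completion behaviour observed in the experiment.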

In the scientific community, the biological process behind Hebb’s theory is also known as “spike-timing-dependent plasticity” (STDP). If neuron A activates neuron B, spike-timing-dependent plasticity is the process that adjusts the connection strength between the two neurons, depending on the timing of the activations. Hebb’s statement, however, that “any two cells that are repeatedly active at the same time will tend to become ‘associated’ so that activity in one facilitates activity in the other”, is an observation, not a mechanism. Why do two neurons become active at the same time in the first place? What exactly happens at a cellular level when neurons fire? Might the process of engram formation actually be more complicated than the proposed spike-timing-dependent plasticity model suggests? This is what I will aim to answer in the upcoming chapters. We will learn how engrams are really formed and stored.
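A common textbook form of the STDP rule uses an exponential timing window; the constants below are illustrative, as the exact values vary across studies. The key asymmetry: if the presynaptic neuron fires shortly before the postsynaptic one, the synapse is strengthened, and if it fires shortly after, it is weakened:

```python
import math

# An illustrative sketch of the exponential STDP window (constants are
# placeholders; real values vary between studies and synapse types).

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Weight change for dt = t_post - t_pre (in milliseconds)."""
    if dt_ms > 0:      # pre fired before post: potentiation ("causal" order)
        return a_plus * math.exp(-dt_ms / tau_ms)
    elif dt_ms < 0:    # pre fired after post: depression ("acausal" order)
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0

print(stdp_dw(+5.0) > 0)   # True: causal order strengthens the synapse
print(stdp_dw(-5.0) < 0)   # True: acausal order weakens it
```

Note what the rule does and does not say: it prescribes how a weight changes *given* the spike timings, but it is silent on why the two neurons spiked together in the first place, which is precisely the gap raised above.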