The Neurobiology of Learning and Memory

Having grown up in a coastal town, I find that the smell of the ocean evokes strong memories. Images of days in the sun, sounds of the crashing waves, the feeling of sand between my toes, the taste of salt water, and the joy of a relaxing afternoon replay in my mind’s eye like a faded home movie of experiences long since passed. The power of memory allows us to create and access a mental representation of the world, or to be swept up in the emotions of previous events. What is this ability to store and access information? What happens to make it possible, and what goes wrong when this power fades, as in dementia? Understanding how learning and memory work on the biological level is one of the most important goals of modern neuroscience.

To frame what’s going on in the above scenario in the language of biology, let’s zoom in to the microscopic level and explore what’s really happening. Molecules we can smell (called odorants) waft through the sea air and land on a mucus-covered membrane deep inside the nose called the olfactory epithelium. Nerve endings in this epithelium detect these odorants and generate an electrical signal in the nerve cells, or ‘neurons’, to which they belong. These neurons’ electrical responses to odor molecules are reliable and reproducible, so familiar smells will generate a familiar pattern of electrical activity that carries information about the detected odorants and relays it to the 3-pound chunk of meat we know as the brain. The brain has approximately 100 billion neurons (!) that work together to process the sensory information coming from the nose and to form the perception we know of as smell.

Obviously, the brain isn’t simply a stimulus-response organ; your perception and the accompanying memories evoked by the same odorants will be completely different from mine because the brain processes that information in the context of each individual’s unique history. For example, someone who had nearly drowned at the beach might respond in a panic at the smell of the salt air, while others drift to calming thoughts of tropical drinks on a vacation. Whatever the memories are, it’s likely you didn’t work hard to put them there. It just happened automatically. Storage, processing, and recall of information are primary functions of the brain, which, unlike a computer, doesn’t save information by flipping bits on a hard drive. It must use the tools of biology: chemicals, genes, proteins, and fats. The “synaptic plasticity and learning hypothesis” is the foremost conceptual framework for understanding memory through the lens of biology. This post will describe the basic concepts behind this hypothesis.

Let’s break down the term ‘synaptic plasticity’ and explore each word separately.

‘Synaptic’ refers to the synapse. A synapse is a point of communication between two cells where the axon of a ‘presynaptic’ neuron approaches the dendrites of a ‘postsynaptic’ neuron. It’s important to note that at most synapses the neurons don’t actually touch. Instead, they come within 20 nanometers (0.00000002 meters!) of each other, leaving a gap between them called the ‘synaptic cleft’.

The existence of the synapse was first inferred by Santiago Ramón y Cajal, the father of modern neuroscience. Ironically, Ramón y Cajal shared his 1906 Nobel Prize with Camillo Golgi, who fiercely denied the existence of synapses. Understanding that neurons don’t actually touch at synapses changed the way the world thought of the nervous system. The synaptic cleft prevents neurons from sending electrical signals to each other directly, but allows a chemical form of neuron-to-neuron communication called ‘neurotransmission’. During neurotransmission, a presynaptic cell fires a rapid electrical signal down its axon called an ‘action potential’, which causes release of chemicals called ‘neurotransmitters’ onto a nearby neuron. Depending on the particular neurotransmitter, this event communicates to the postsynaptic cell that it should increase or decrease the probability it fires its own action potential. Most people are familiar with some neurotransmitters, such as dopamine or serotonin. What you may not know is that the most important neurotransmitter in the brain is an amino acid called ‘glutamate’. Glutamate is an ‘excitatory’ neurotransmitter because it stimulates neurons to fire. Although the roles of other types of neurotransmitters are fascinating, the rest of this post will focus on glutamate and excitatory transmission.

Figure 1: Synaptic Transmission. A presynaptic cell fires an electrical signal called an action potential (1), which triggers synaptic vesicles (2) to merge with the neuron’s outer membrane (3) and release glutamate (4) into the synaptic cleft. Glutamate binds to postsynaptic glutamate receptors (5), causing them to open and allow positively charged sodium ions (Na+) into the cell. If enough sodium enters the postsynaptic neuron, it will fire its own action potential.


OK, so now that we’ve introduced the idea of the synapse, let’s touch on the concept of plasticity. Plastics are malleable. They can bend or be heated to change shape. Synapses are also capable of change, particularly in their strength and size, and are thus said to have plasticity. For the sake of argument, let’s pretend that any given synapse can be rated on a strength scale of 1–10, where a 1 represents an excitatory synapse that has a barely noticeable impact on the postsynaptic neuron, and a 10 represents a synapse that when active causes postsynaptic firing 100 percent of the time. There are biological processes that allow neurons to increase or decrease the strength of synapses so that a 1 might be made into a 5, or a 10 dropped to a 4, and so on. Collectively, these mechanisms of change are called synaptic plasticity. Plasticity is a diverse phenomenon; it comes in different varieties involving multiple neurotransmitters. The specific molecular mechanisms that allow synapses to alter their strength also vary, depending on which neurons are doing the talking and the listening. For the sake of simplicity, we’ll focus only on two forms of plasticity in the hippocampus: long-term potentiation (LTP) and long-term depression (LTD). These are the ability to increase or decrease synaptic strength, respectively.

The details of LTP and LTD can get overwhelming in a hurry, so I’ll cut it down to the bare essentials. The excitatory neurotransmitter glutamate is the star of our show. At the tips of a presynaptic neuron’s axon, where it synapses onto a postsynaptic neuron’s dendrites, is a specialized structure called a synaptic release terminal. The terminal contains tiny spheres (synaptic vesicles) packed full of glutamate, and all of the machinery necessary to dump their contents onto its neighbor in response to an action potential. On the postsynaptic side there are structures called spines that contain proteins called ‘glutamate receptors’. These receptors, like all proteins, are tiny nanomachines. The particular function of these machines is to sense glutamate in the synaptic cleft and open a tiny tunnel between the inside of the postsynaptic neuron and the outside world. Not just anything can flow through this tunnel; only small atomic ions can fit (these receptors are part of a subset of proteins called ion channels). The glutamate receptor has evolved to let mostly sodium ions (Na+) into the cell. Since sodium ions carry a positive electric charge, bringing them into the postsynaptic cell stimulates it. If enough glutamate receptors open, then the postsynaptic cell fires its own action potential, signaling it to release its own neurotransmitters. Your 100 billion neurons are doing this to their neighbors all day, every day.
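The integrate-and-fire logic described above can be caricatured in a few lines of code. This is a deliberately crude toy model, not a biophysical simulation: the function name, the threshold, and the strengths (on the arbitrary 1–10 scale from earlier) are all my own illustrative choices.

```python
# Toy model: a postsynaptic neuron sums the excitatory drive from its
# currently active inputs and fires an action potential only if the
# total crosses a threshold. All numbers are illustrative.

def postsynaptic_fires(synapse_strengths, active_inputs, threshold=10.0):
    """Return True if the summed drive from the active presynaptic
    cells meets or exceeds the firing threshold."""
    drive = sum(synapse_strengths[i] for i in active_inputs)
    return drive >= threshold

# Three presynaptic inputs rated on the arbitrary 1-10 strength scale.
strengths = [2.0, 4.0, 9.0]

print(postsynaptic_fires(strengths, active_inputs=[0]))        # weak input alone: False
print(postsynaptic_fires(strengths, active_inputs=[0, 1, 2]))  # all three together: True
```

The point of the sketch is simply that firing is a collective outcome: no single input decides it, which is why changing individual synaptic strengths (as in LTP and LTD) changes how the cell responds to patterns of input.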


Figure 2: LTP and LTD. A) When a presynaptic action potential predicts the occurrence of an action potential in the postsynaptic cell, LTP occurs. This is because neurons want to strengthen the connection between cells which promoted their own firing. To do this, neurons take glutamate receptors that are sitting in vesicles inside the postsynaptic cell and insert them into the outer membrane at the base of a synaptic spine. After insertion, the new glutamate receptors migrate laterally to the synapse, where they act to increase the postsynaptic neuron’s sensitivity to glutamate. B) The reverse process, called LTD, occurs when presynaptic firing does not lead to postsynaptic firing. The postsynaptic neuron is listening to many other neurons, and it doesn’t want to focus too much on things that aren’t associated with its own action potentials. So the cell causes glutamate receptors to move laterally to the base of the spine, where they are internalized, reducing the sensitivity to glutamate.


The concept of LTP is often described by the phrase “cells that fire together, wire together”. This form of plasticity happens when the presynaptic cell’s glutamate release predicts the postsynaptic cell’s firing of an action potential. When this correlated activity occurs, new glutamate receptors are added to the postsynaptic side of the synapse. Additional glutamate receptors make the synapse more responsive to its presynaptic input; the same release of glutamate now has a stronger effect on the postsynaptic cell.

The converse, LTD, happens when the presynaptic cell’s glutamate release is anti-correlated with the postsynaptic cell’s firing. If the presynaptic cell fails to cause, or predict firing in its partner, the postsynaptic cell removes glutamate receptors from the synapse. Thus, a fixed amount of glutamate release has a reduced impact on the excitability of the postsynaptic neuron.
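These two rules can be summarized as a simple correlation-based update. Again, a crude sketch on the made-up 1–10 strength scale: real induction rules depend on spike timing, calcium dynamics, and more, and the function name and step sizes here are my own.

```python
def update_strength(strength, pre_fired, post_fired,
                    ltp_step=1.0, ltd_step=0.5, min_s=1.0, max_s=10.0):
    """Caricature of LTP/LTD on an arbitrary 1-10 strength scale:
    correlated pre/post firing strengthens the synapse (LTP);
    presynaptic firing without postsynaptic firing weakens it (LTD)."""
    if pre_fired and post_fired:
        strength += ltp_step          # LTP: "fire together, wire together"
    elif pre_fired and not post_fired:
        strength -= ltd_step          # LTD: uncorrelated input is de-emphasized
    return min(max(strength, min_s), max_s)

s = 5.0
s = update_strength(s, pre_fired=True, post_fired=True)   # LTP: 5.0 -> 6.0
s = update_strength(s, pre_fired=True, post_fired=False)  # LTD: 6.0 -> 5.5
```

Note the asymmetry built into the sketch: only presynaptic activity triggers a change in either direction, matching the idea that the synapse is judged by whether its own input predicted the postsynaptic response.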

I think it’s important here to take these new concepts and put them into proper perspective. The 100 billion neurons in the brain organize into local networks that process certain types of information. For example, the smells of the ocean are processed in the olfactory cortex, and the emotions of that experience are processed in the amygdala. These areas that process specific features of the world communicate information across the central nervous system, giving rise to all human experience. Not only are there an incomprehensibly large number of neurons in the brain, the average neuron integrates thousands or even tens of thousands of synaptic inputs. This YouTube movie from Steven Smith’s lab at Stanford makes the point quite well by labeling every synapse in a 1 millimeter cube of mouse brain; each red dot is a single synapse. For a sense of scale, the human brain is 1,450,000 times larger than the volume shown in the video. My point here is that it is impossible to use our own intuition to make sense of how changing the strengths of synapses relates to smelling ocean air and reminiscing of days past. This isn’t like taking a car engine apart and seeing the pistons. The complexity of the brain is, perhaps, too great to be understood by intuition alone.

So why do neuroscientists believe that synaptic plasticity has anything to do with learning? To conceptualize how this might work, I first ask for a willful suspension of disbelief: what follows is a useful thought experiment, but a drastic oversimplification.

Many people are familiar with the Russian physiologist Ivan Pavlov and his work with dogs. Pavlov found that the presentation of food to dogs caused an automatic salivation response. If he repeatedly paired the giving of food with the tone of a bell, eventually the dogs would salivate in response to the bell alone. This phenomenon is called ‘classical conditioning’. Let’s imagine that a ‘tone’ neuron and a ‘smell’ neuron are connected to a ‘salivation’ neuron. Initially, the connection between the smell neuron and the salivation neuron is strong, thanks to canine evolution: anything palatable makes the dog drool. Additionally, there’s a weak connection from the ears to the mouth; most sounds don’t make the dog salivate. However, if firing of the neurons that carry information about the tone repeatedly correlates with firing of the salivation neurons, over time LTP will strengthen that connection until the sound can drive salivation on its own. The strengthening of the connection between the ‘tone’ neuron and the ‘drool’ neuron encodes the learned experience that the tone predicts food.
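We can act out this thought experiment in code. Everything here is hypothetical and illustrative: the synaptic weights, the firing threshold, and the one-step LTP rule are stand-ins, not measured quantities, and the simulation is a cartoon of conditioning rather than a model of a real circuit.

```python
# Toy classical-conditioning sketch: a 'tone' synapse starts weak and a
# 'smell' synapse starts strong onto a shared 'salivation' neuron.
# Repeated pairing of tone + food lets LTP strengthen the tone synapse
# until the tone alone can drive salivation. All numbers are made up.

THRESHOLD = 8.0  # drive needed for the salivation neuron to fire

def salivates(tone_w, smell_w, tone_on, smell_on):
    """Fire if the weighted sum of active inputs crosses threshold."""
    drive = tone_w * tone_on + smell_w * smell_on
    return drive >= THRESHOLD

tone_w, smell_w = 1.0, 9.0            # weak tone synapse, strong smell synapse

before = salivates(tone_w, smell_w, tone_on=1, smell_on=0)   # tone alone: False

# Pairing trials: tone and food presented together. Salivation fires, so
# the active tone synapse undergoes LTP (capped at 10 on our toy scale).
for _ in range(10):
    if salivates(tone_w, smell_w, tone_on=1, smell_on=1):
        tone_w = min(tone_w + 1.0, 10.0)

after = salivates(tone_w, smell_w, tone_on=1, smell_on=0)    # tone alone: True

print(before, after)  # False True
```

The smell synapse never changed; only the tone synapse was strengthened, because its firing was repeatedly correlated with the postsynaptic response. That selectivity is what makes the association specific to the bell rather than to sounds in general.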

That’s an instructive fantasy, but what is the real evidence that connects synaptic plasticity and learning? First, manipulations that disrupt synaptic plasticity also disrupt memory. Second, learning induces synaptic plasticity. Finally, manipulation of neurons that underwent plasticity in response to learning can activate or erase memories. On the other hand, artificial induction of LTP doesn’t create synthetic memories on its own. Our understanding of the precise nature of the relationship between memory and plasticity is still incomplete.

Another connection between synaptic plasticity and memory is that both memory and plasticity processes become disrupted in Alzheimer’s disease and other diseases in which cognition declines. Alzheimer’s is a disease of neurodegeneration, the death of neurons. Interestingly, memory impairment and dysfunctional synaptic plasticity precede the loss of cells. Whether or not these early stages of the disease cause later degeneration is unknown and an active area of research.

Here in the Finkbeiner lab, we study the mechanisms of synaptic plasticity. Our focus has been on two molecules: the activity-regulated cytoskeleton-associated protein, Arc, and an enzyme called protein kinase D1 (PKD1). Arc is a master regulator of synaptic plasticity; it is critical for sustaining the changes that occur in LTP and LTD, as well as for long-term memory. PKD1 is poorly studied in the nervous system. We have found that it is a positive regulator of LTP and LTD induction, and that it is critical for certain forms of memory.


This post is meant to be the opening of a larger conversation between readers and scientists. For the sake of communicating the essence of this topic, I introduced several concepts without digging in to the details. Below you’ll find a poll. Please vote for the ideas you’d most like to hear more about, and we’ll focus our future posts based on your feedback. Thanks for participating in our experiment!



Article by Mathew Campioni, Postdoctoral Researcher