Hafsteinn, a Computer Scientist with a background in Comp. Neuro and ML

Leaky integrate and fire neurons (LIF)

This post assumes basic understanding of neurons. For a short introduction to neurons please see this Wikipedia article.

There are multiple existing models for biological neurons, some of which even contradict each other. In this post we explore the leaky integrate-and-fire (LIF) neuron model, which is commonly used in simulations that mimic neural networks found in the brain. It has the advantage of being simple while still capturing the large-scale dynamics of how a single neuron functions.

Note that not all of the cells in our brains spike (i.e. transmit discrete, all-or-none signals) and behave like the one we discuss below. Most of the cells we call neurons do spike, however, and it is commonly acknowledged that the signals they send amongst themselves form the basis of information dissemination and computation in the brain.

The LIF neuron is what is called a membrane voltage model. It is inspired by experiments, first carried out by Hodgkin and Huxley in the 1950s (Hodgkin & Huxley, 1952), which measure the voltage difference between the inside and the outside of a neuron. The model is quite simple: we give some input to the neuron, in the form of an electrical current, and observe how the voltage across the membrane changes over time.

Before the experiments of Hodgkin and Huxley, the integrate-and-fire neuron model (note that the leaky part is missing) had already been proposed. It is in fact a variant of the model we used in the previous posts on bootstrap percolation. In that model a neuron never forgets the input it received in the past. If it requires two input spikes in order to spike itself, these two spikes are even allowed to arrive a year apart from each other. In reality any trace of the first spike would vanish a few milliseconds later. Note however that integrate-and-fire models are still useful, for example when modelling phenomena which occur at very short timescales.
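This accumulate-forever behaviour can be sketched as a simple counter. The class and its names below are purely illustrative, and the default threshold of two spikes just matches the example above:

```python
class IntegrateAndFire:
    """A toy (non-leaky) integrate-and-fire unit: it counts input
    spikes forever, so two inputs a year apart still add up."""

    def __init__(self, threshold=2):
        self.threshold = threshold  # number of input spikes needed to fire
        self.count = 0              # accumulated input, never decays

    def receive_spike(self):
        """Register one input spike; return True if the unit fires."""
        self.count += 1
        if self.count >= self.threshold:
            self.count = 0  # reset after firing
            return True
        return False
```

However long we wait between the two calls to `receive_spike`, the second one always triggers a spike, which is exactly what the leak will later prevent.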

The fact that neurons forget old spikes corresponds to the leaky part of the name. You can think of a neuron as a bucket. When it receives input, a bit of water gets added to the bucket. When the bucket is full, the neuron spikes, and that spike corresponds to adding a bit of water to all the neighbouring buckets. There is a catch, however: the bucket has a hole in the bottom, so the water constantly leaks out.

If you want to skip the math part now is the time to scroll down to the bottom to see a live, interactive simulation of a LIF neuron.

The water level in the bucket corresponds to the membrane voltage of the neuron. The neuron prefers to be a bit polarised, and its resting potential (corresponding to an empty bucket) is at around $-70$ mV. Whenever the neuron receives an input spike some channels open on the membrane and ions flow through, which depolarises the neuron. The neuron also has ion pumps on the membrane which actively pump these ions back out; this corresponds to the leak.

Since the membrane voltage changes over time we can model it using a differential equation. Such an equation tells us how the membrane voltage changes from one moment to the next. We denote the voltage at time $t$ by $V(t)$ and we denote the resting potential by $E_L$ (also known as the leak reversal potential). Additionally we denote the input conductance to the neuron at time $t$ by $g(t)$ and the excitatory reversal potential by $E_e$, which is the membrane voltage towards which the input drives the neuron.

For a short time interval $\Delta t$ the voltage changes as follows

$$V(t + \Delta t) = V(t) + \frac{\Delta t}{C_m}\Big(-g_L\big(V(t) - E_L\big) - g(t)\big(V(t) - E_e\big)\Big)$$

where the leak conductance $g_L$ and the membrane capacitance $C_m$ are positive constants. So you see that on the one hand the voltage $V(t)$ is drawn towards $E_L$ via the constant leak, and on the other hand it is drawn towards $E_e$ via the input current (here modelled as a conductance). One popular approach is to model the input conductance via exponential decay; the conductance then simply behaves as

$$g(t + \Delta t) = g(t) - \frac{\Delta t}{\tau_{syn}}\,g(t)$$

where the synaptic time constant $\tau_{syn}$ is also a positive constant. We additionally increase $g$ by some value $w$ whenever the neuron receives an input spike, where $w$ corresponds to the weight of the incoming synapse. This still does not capture how the neuron spikes. As in the integrate-and-fire model it just needs to cross some threshold, which we denote by $V_{th}$. So whenever $V(t)$ exceeds $V_{th}$ the neuron emits a spike. Following the spike the neuron enters a refractory period during which it ignores all incoming spikes. After that period the membrane voltage is reset to a fixed value, denoted by $V_{reset}$, which is called the reset potential.
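The update rules above can be sketched as a small Euler-integration loop. This is a minimal illustrative sketch, not the code behind the simulation below: all parameter values are placeholder guesses in roughly realistic ranges, and the function and variable names are my own.

```python
# Conductance-based LIF neuron, Euler-integrated.  All parameter
# values are placeholder guesses, not the ones used in the post.
DT = 0.1          # integration step (ms)
C_M = 250.0       # membrane capacitance C_m (pF)
G_L = 16.7        # leak conductance g_L (nS)
E_L = -70.0       # leak reversal / resting potential E_L (mV)
E_E = 0.0         # excitatory reversal potential E_e (mV)
TAU_SYN = 2.0     # synaptic time constant tau_syn (ms)
V_TH = -55.0      # spike threshold V_th (mV)
V_RESET = -70.0   # reset potential V_reset (mV)
T_REF = 2.0       # refractory period (ms)

def simulate(input_spike_times, t_max=100.0, w=50.0):
    """Simulate the neuron; return the voltage trace and output spike times."""
    v, g = E_L, 0.0
    refractory_until = -1.0
    trace, out_spikes = [], []
    bump_steps = {int(round(s / DT)) for s in input_spike_times}

    for step in range(int(t_max / DT)):
        t = step * DT
        # An input spike bumps the conductance by the synaptic weight w,
        # unless the neuron is in its refractory period.
        if step in bump_steps and t >= refractory_until:
            g += w
        # Exponentially decaying conductance: g <- g - dt * g / tau_syn.
        g -= DT * g / TAU_SYN
        if t >= refractory_until:
            # V <- V + dt/C_m * (-g_L (V - E_L) - g (V - E_e))
            v += DT / C_M * (-G_L * (v - E_L) - g * (v - E_E))
            if v >= V_TH:
                out_spikes.append(t)
                refractory_until = t + T_REF
                v = V_RESET  # held at the reset potential while refractory
        trace.append(v)
    return trace, out_spikes
```

With no input the voltage simply sits at $E_L$; a sufficiently strong input spike drives it past threshold, after which it is held at the reset potential for the duration of the refractory period.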

Below you can see an interactive simulation of a LIF neuron following the dynamics described above. You can press the input neuron on the left to send a spike to the output neuron on the right. The membrane potential of the target neuron is visualised via the radius of the circle representing it. The dynamics of the membrane potential are 100 times slower than in reality (otherwise you would not see much happening), and the parameters as seen above are set to somewhat realistic values. Note however that the time to deliver the spike between the two neurons and the length of the refractory period are exaggerated for demonstration purposes. If you want to run large-scale simulations using this type of neuron you can do so using the NEST simulator; the parameters used here are taken from (Meffin, Burkitt, & Grayden, 2004).

References

  1. Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117(4), 500.
  2. Meffin, H., Burkitt, A. N., & Grayden, D. B. (2004). An analytical model for the 'large, fluctuating synaptic conductance state' typical of neocortical neurons in vivo. Journal of Computational Neuroscience, 16(2), 159–175.