Models of networks of neurons, or neural circuits, help us understand how large populations of neurons can work together to perform tasks. Using models, theorists have learned how networks can sustain particular patterns of reverberating activity or generate oscillatory or chaotic firing.
Network models address such issues as how neurons in primary sensory areas produce their responses, how cortical maps form and are modified by sensory experience, how populations of neurons represent information collectively and combine multisensory data, how objects are identified and classified, how memories stored in modified synaptic strengths can be read out, how motor responses are generated, and how evidence is integrated and used to make decisions.
Appendix E discusses basic ideas of how feed-forward circuitry (circuitry with no synaptic loops) may underlie the response properties of simple and complex cells in the primary visual cortex, and how recurrent circuitry (circuitry with synaptic loops) can create a cell assembly with strong mutual excitation among its neurons, allowing the assembly's activity to persist in the absence of a stimulus (thus forming an attractor). Here we continue these discussions with examples from network modeling.
We will examine how recurrent loops change both the gain and dynamics of responses to inputs, how a fully recurrent network can have hidden within it an effectively feed-forward structure that allows changes in gain to be separated from changes in dynamics, and how these ideas can give insight into aspects of visual cortex function. We also show how the basic idea of attractors can be extended to create a model of decision-making. Before we consider these examples, we show what can happen when model neurons like the one discussed previously are linked together into a network.
Balanced Networks of Active Neurons Can Generate the Ongoing Noisy Activity Seen In Vivo
When constructing and analyzing large networks it is typically not practical to model the constituent neurons with a high level of detail. Instead, the multitude of synaptic conductances and the complexity of dendritic morphology seen in real neurons are usually distilled down to a bare minimum.
In network models individual neurons are often modeled as integrate-and-fire neurons (see Box F–1) but may also be modeled as firing-rate neurons, meaning that only the rate at which a neuron fires is modeled, and not the timing of individual spikes (Box F–2). We employ both neuron models here. Simplifying the descriptions of individual neurons allows us to focus on effects that arise through network interactions.
Box F–2 Firing-Rate Models
A network in which neurons and the interactions between them are described in terms of firing rates has two critical elements. The first is the relationship between the total synaptic current I that a neuron receives and its firing rate r. For current that is constant, this relationship is given in terms of a firing-rate function, r = F(I).
When the current varies with time we assume that the firing rate lags behind but approaches this function exponentially, with a time constant τ, so that

τ dr/dt = −r + F(I).

Because this is a considerable simplification of the transformation from spiking to firing rate, the time constant τ does not have a straightforward biophysical interpretation and must represent the temporal response properties of the system as a whole, including (but not limited to) the effects of both membrane and synaptic time constants.
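This exponential approach of the rate toward F(I) can be sketched numerically. The following Python fragment integrates τ dr/dt = −r + F(I(t)) with the Euler method; the threshold-linear choice of F and all parameter values are illustrative assumptions, not taken from the text.

```python
def simulate_rate(I_of_t, tau=0.02, dt=0.001, T=0.2, F=lambda I: max(I, 0.0)):
    """Euler integration of the firing-rate equation tau*dr/dt = -r + F(I(t)).
    tau, dt, T are in seconds; F is an illustrative threshold-linear
    rate function."""
    r, rates = 0.0, []
    for step in range(int(T / dt)):
        I = I_of_t(step * dt)            # input current at the current time
        r += (dt / tau) * (-r + F(I))    # rate relaxes toward F(I)
        rates.append(r)
    return rates

# For a constant input the rate approaches F(I) exponentially with
# time constant tau:
rates = simulate_rate(lambda t: 10.0)
```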
The second element needed to construct a network model is the relationship between I and the activity of other neurons in the network. The total current for each neuron is the sum of terms representing each of its inputs. The contribution of an individual presynaptic neuron to I is given by the product of its firing rate and a weight factor that characterizes the strength and type of the synapse through which it acts. The weights for excitatory synapses are positive whereas those for inhibitory synapses are negative.
If the network we are studying receives input from other areas outside the local network, this is included as an additional term in I that we denote by h.
As an example of a firing-rate model, consider a network of two populations of cells. The neurons of an excitatory population all fire at a rate rE, and those of an inhibitory population fire at rate rI. The external inputs to these two populations are denoted by hE and hI.
The strength of the synaptic connections between neurons of the excitatory population is denoted by wEE, that between the inhibitory neurons by wII, and the connections from the excitatory to the inhibitory and from the inhibitory to the excitatory populations have strengths given by wIE and wEI, respectively.
The resulting equations for the firing rates of the two populations are

τ drE/dt = −rE + F(wEE rE + wEI rI + hE)
τ drI/dt = −rI + F(wIE rE + wII rI + hI),

where, in keeping with the sign convention above, the inhibitory weights wEI and wII are negative.
Equations similar to these were used for Figures F–3, F–5 through F–7, and in Box F–3.
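A minimal simulation of such a two-population model might look as follows in Python. The weight values in the example call are illustrative assumptions chosen to give a stable fixed point, with wEI and wII negative as the sign convention requires; the threshold-linear F is also an assumption.

```python
def simulate_EI(hE, hI, wEE, wEI, wIE, wII, tau=0.01, dt=0.0005, T=0.5):
    """Two-population firing-rate network (Box F-2 conventions):
    the inhibitory weights wEI and wII are passed in as negative numbers."""
    F = lambda I: max(I, 0.0)            # threshold-linear rate function (assumption)
    rE = rI = 0.0
    for _ in range(int(T / dt)):
        IE = wEE * rE + wEI * rI + hE    # total current to the excitatory population
        II = wIE * rE + wII * rI + hI    # total current to the inhibitory population
        rE += (dt / tau) * (-rE + F(IE))
        rI += (dt / tau) * (-rI + F(II))
    return rE, rI

# Example: external drive to E only; the network settles to a steady state in
# which feedback inhibition partially cancels the recurrent excitation.
rE, rI = simulate_EI(hE=10.0, hI=0.0, wEE=0.5, wEI=-1.0, wIE=1.0, wII=-0.5)
```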
Box F–3 The Decision Model
In the decision network ra and rb denote the firing rates of the excitatory populations a and b, each representing a unique decision, and rI is the firing rate of an inhibitory population. As in Figure F–6A, we assume for simplicity that there is no excitatory coupling between the excitatory populations and that each receives identical input from the inhibitory population.
The equations of the model are:

τ dra/dt = −ra + F(wEE ra + wEI rI + ha)
τ drb/dt = −rb + F(wEE rb + wEI rI + hb),

where, as in Box F–2, the self-excitatory weight wEE is positive and the inhibitory weight wEI is negative.
A small amount of random (white) noise is added to the right side of these equations. To avoid having a third differential equation for inhibitory firing rates, we use the approximation that the inhibition responds instantaneously and assume that r I is proportional to the sum of the excitatory rates r a and r b.
Without input (ha = hb = 0) the firing rates of the two excitatory populations are equal and low. As a result, neither action is preferred or taken. This is the same no-decision state seen during the initial 200 ms period of the simulations in Figure F–6.
To induce a decision we introduce excitatory inputs corresponding to a sensory stimulus. Sufficiently large but equal inputs (ha = hb > 0) result in two stable states that correspond to the two decision states, ra > rb or ra < rb. The state corresponding to the previous no-decision outcome has now disappeared.
To introduce a bias into the decision, we make ha and hb different. For example, if ha is significantly larger than hb, only the stable state ra > rb survives. For smaller biases there are regimes in which two stable states correspond to the two decisions, but the system is more likely to enter one stable state than the other.
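A schematic implementation of this decision circuit is sketched below. All parameter values (the self-excitation wEE, the inhibitory weights, the saturation level, and the noise amplitude) are illustrative assumptions; with these saturating threshold-linear units the sketch captures the winner-take-all behavior once inputs arrive, not the full bifurcation structure described above.

```python
import random

def decide(ha, hb, wEE=2.0, wEI=-1.0, gI=1.0, r_max=50.0, tau=0.01, dt=0.001,
           T=1.0, noise=0.2, seed=0):
    """Decision model in the spirit of Box F-3, with instantaneous feedback
    inhibition rI = gI*(ra + rb). The rate function saturates at r_max so the
    winning population's rate stays bounded. All values are illustrative."""
    rng = random.Random(seed)
    F = lambda I: min(max(I, 0.0), r_max)   # saturating threshold-linear function
    ra = rb = 0.0
    for _ in range(int(T / dt)):
        rI = gI * (ra + rb)                 # instantaneous feedback inhibition
        Ia = wEE * ra + wEI * rI + ha + noise * rng.gauss(0.0, 1.0)
        Ib = wEE * rb + wEI * rI + hb + noise * rng.gauss(0.0, 1.0)
        ra += (dt / tau) * (-ra + F(Ia))
        rb += (dt / tau) * (-rb + F(Ib))
    return ra, rb

# A biased input (ha > hb) reliably selects decision a; with equal inputs the
# noise breaks the tie and one population wins at random.
```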
What happens if we link together a large number of integrate-and-fire neurons through excitatory and inhibitory synapses? Such models typically involve thousands or even hundreds of thousands of neurons connected randomly and sparsely so that the probability of any two neurons being connected is less than 10%. Excitatory and inhibitory neurons are included in roughly the 4:1 proportion seen in cortical circuits.
The activity in such networks takes a number of different forms as shown by Carl van Vreeswijk and Haim Sompolinsky and by Nicolas Brunel. When the synaptic connections are weak, the bulk of the neurons are silent; but some fire in steady, regular sequences of action potentials that are not synchronized with the activity of other neurons of the network (Figure F–2A). As the strengths of the synapses, both excitatory and inhibitory, are increased, the network can transition into a state where the silent neurons start to fire, and the action potentials appear in irregular, asynchronous patterns (Figure F–2B). This form of activity provides a model of the background activity seen in real neural circuits.
Activity in networks of spiking neurons.
The raster plots show a representative sample of neurons in a network of integrate-and-fire model neurons (only a fraction of network neurons are shown). Each row is a separate neuron, and each dot in a given row represents an action potential fired by that neuron. The voltage traces below the raster plots show a single representative neuron. The trace in part A denotes a tonically active excitatory neuron, while the traces in parts B and C are for nontonically active excitatory neurons. A tonically active neuron is one that receives a constant input current in addition to input from the network but is otherwise identical to the other excitatory neurons. The peaks of the action potentials are clipped at 0 mV.
Individual neurons may fire at regular intervals (regular firing) or at random times (irregular firing). In addition, neurons tend to fire independently rather than synchronizing their spike times (asynchronous firing).
A. In a weakly coupled network the tonically active neurons show regular asynchronous firing while all other neurons are silent.
B. A network with strong but balanced excitation and inhibition exhibits irregular, asynchronous spiking.
C. A network with excessively strong excitation shows seizure-like activity with the synchronous firing of a large fraction of the neurons.
The irregular, asynchronous activity depends on the sparseness of the synaptic connections in the network and on a balance between excitatory and inhibitory inputs received by a cell. These inputs arise from many other excitatory and inhibitory neurons that are themselves firing irregularly. Overall these inputs are balanced—on average, excitation and inhibition cancel—but constant fluctuations in input drive the cell to fire at irregular times. Thus the network self-consistently maintains irregular firing in all of its neurons. If the balance of excitation and inhibition is not maintained, a network-model form of epilepsy can arise (Figure F–2C). In this case the asynchronous activity is interrupted by gaps and periods of synchronous firing across the population. It is interesting that network models of ongoing spontaneous activity suffer quite easily from seizure-like activity, just like real neural circuits.
Irregular, asynchronous firing, much like the background firing seen in many cortical areas, can arise in network models with different patterns of synaptic connectivity, such as when the probability of connection between two neurons decreases with the distance between them or with the difference between their selectivities to stimulus properties. The major requirement is that inhibition must balance excitation, so that the mean input cannot drive the cell to fire and instead firing is induced by input fluctuations. In addition, connectivity should be sparse so that the firing of different cells does not become synchronized.
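The balanced state described above can be reproduced in a small simulation. The following sketch is in the spirit of the van Vreeswijk–Sompolinsky and Brunel models, with illustrative parameters of our own choosing: sparse random connectivity, a 4:1 ratio of excitatory to inhibitory neurons, inhibitory synapses five times stronger than excitatory ones, and suprathreshold external drive that the net recurrent input partially cancels.

```python
import random

def balanced_lif_network(N=200, frac_exc=0.8, p_conn=0.1, J=1.0, g=5.0,
                         mu_ext=25.0, tau=20.0, v_th=20.0, v_reset=10.0,
                         dt=0.1, T=500.0, seed=1):
    """Sparse network of integrate-and-fire neurons with inhibition-dominated
    recurrence. Units are ms and mV; a spike in neuron i instantaneously kicks
    each of its targets by J (excitatory) or -g*J (inhibitory). All parameter
    values are illustrative assumptions."""
    rng = random.Random(seed)
    n_exc = int(frac_exc * N)
    targets = [[j for j in range(N) if j != i and rng.random() < p_conn]
               for i in range(N)]                    # sparse random connectivity
    weight = [J if i < n_exc else -g * J for i in range(N)]
    v = [v_reset + rng.random() * (v_th - v_reset) for _ in range(N)]
    spikes = []                                      # (time in ms, neuron index)
    for step in range(int(T / dt)):
        t = step * dt
        kicks = [0.0] * N
        for i in range(N):
            if v[i] >= v_th:                         # threshold crossing: spike
                spikes.append((t, i))
                v[i] = v_reset
                for j in targets[i]:
                    kicks[j] += weight[i]
        for i in range(N):                           # leaky integration plus kicks
            v[i] += (dt / tau) * (mu_ext - v[i]) + kicks[i]
    return spikes, n_exc
```

Because the external drive alone is suprathreshold, silencing the network is not self-consistent; the rates settle at a level where recurrent inhibition roughly cancels the excess drive.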
What can structured network circuitry achieve? In the following discussion we provide a few illustrative answers to this question. For this analysis we switch from a network of spiking model neurons to one in which the activities of network neurons are described by firing rates (Box F–2). Networks of spiking model neurons can display patterns of activity that cannot be reproduced by firing-rate models, but there is good agreement between the two types of models for the steady-state asynchronous activity we discuss here and in the following sections. Describing neuronal responses in terms of firing rates makes the mathematical analysis of neural networks much easier.
Feed-forward and Recurrent Networks Can Amplify or Integrate Inputs with Distinct Dynamics
The relative roles played by feed-forward and recurrent circuitry in shaping neuronal responses are a subject of debate in neuroscience. For example, neurons in layer IV of the primary visual cortex receive afferent inputs from the lateral geniculate nucleus of the thalamus, which relays visual information from the eyes, but they also receive abundant innervation from other cortical neurons. Are the tuning properties of these neurons—the dependence of their firing rates on stimulus parameters such as the orientation of a light or dark edge—determined mainly by feed-forward input from the lateral geniculate nucleus, or are they strongly shaped by recurrent cortical feedback?
The answer is not immediately obvious because feed-forward circuits and circuits in which tuning is strongly shaped by recurrence (recurrent networks) can produce the same types of response selectivity or tuning. Intracellular recording in vivo, in which voltage responses and their changes under experimental perturbations are studied, can more clearly distinguish the two types of circuits. However, given only firing rate responses, the differences between feed-forward and recurrent circuits are easiest to detect by examining response dynamics rather than static response properties. For this reason, here we discuss the dynamics of network responses in various forms of feed-forward and recurrent circuits.
A feed-forward circuit can modify input signals, creating a wide variety of response selectivities or amplifying a weak input without significantly altering response dynamics. For example, if one set of neurons forms strong excitatory synapses with another set, a weak input to the first set can yield a strong response in the second. This occurs with only a small dynamic change, namely the small delay required for the first set of neurons to integrate the input and produce spikes that propagate to drive the second set of neurons.
Similarly, recurrent circuits can modify input signals, but typically with larger dynamic changes. An example of this is a circuit that amplifies its inputs through recurrent excitatory loops. Consider a population of neurons that excites itself. The population's response to an external input can be significantly amplified because the recurrent excitation adds to the external drive. However, unlike feed-forward amplification, the recurrent excitation is accompanied by a general slowing of the population's response dynamics, which arises as follows.
A population responds to a pulse of input with a pulse of activity that then decays away. The recurrent excitation adds back some of the activity that would otherwise decay, slowing the decay of the population's activity. The population's response to a sustained input is similarly slowed; the ultimate response to input is amplified by a factor roughly equal to the degree of slowing (ie, a threefold slowing yields a threefold amplification) (Figure F–3A). This can be understood by thinking of the sustained input as a continuous sequence of pulse inputs. We imagine that the response to the sequence of pulses is just the sum of the responses to each individual pulse. (This is not quantitatively correct but provides a useful qualitative representation.) Because individual pulse responses decay more slowly, the overall level to which they sum is increased, but the rise of activity to this level occurs more slowly.
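The link between amplification and slowing can be made concrete with a linear rate unit that excites itself with weight w: its steady-state gain is 1/(1 − w) and its effective time constant is τ/(1 − w), so a weight of 2/3 gives both a threefold amplification and a threefold slowing. A minimal Python sketch with arbitrary parameter values:

```python
def step_response(w, h=1.0, tau=0.01, dt=0.0001, T=0.3):
    """Linear rate unit with recurrent self-excitation w:
    tau*dr/dt = -r + w*r + h. For w < 1 the steady state is h/(1 - w)
    and the effective time constant is tau/(1 - w)."""
    r, trace = 0.0, []
    for _ in range(int(T / dt)):
        r += (dt / tau) * (-r + w * r + h)
        trace.append(r)
    return trace

# w = 2/3: the response to a step input is amplified threefold and rises
# threefold more slowly than the w = 0 response.
```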
Responses in excitatory and inhibitory networks of firing-rate neurons.
A. Response of a purely excitatory recurrent network to a square step of input (hE). The blue curve is the response without excitatory feedback. Adding recurrent excitation increases the response but makes it rise and fall more slowly (solid red curve). The dashed red curve is a smaller copy of the solid red curve (scaled by a factor of 0.5) so that the time course of the solid red and blue curves can be compared more easily.
B. Response of a purely inhibitory recurrent network to a square step of input (hI). The blue curve shows the response without recurrent inhibition. Adding recurrent inhibition decreases the response but makes it rise and fall more rapidly (solid red curve). The dashed red curve is a larger (2X) copy of the solid red curve.
C. Response of an integrator network to two input pulses (hE). The response is the integral of the input and remains constant when the input is not present.
D. Response of the excitatory population in a mixed excitatory/inhibitory recurrent network to input to the excitatory neurons (hE). The excitatory population excites both itself and the inhibitory population, whereas the inhibitory population inhibits both itself and the excitatory population, thus providing feedback inhibition to the excitatory population. The blue curve shows the response without recurrent connections (ie, without recurrent excitation or feedback inhibition). Adding the excitatory and inhibitory recurrent connections increases the response amplitude with little change in its time course (solid red curve). The dashed red curve is a scaled copy of the solid red curve as in part A.
A network with recurrent excitation and inhibition can also have inhibitory loops through which a population of neurons inhibits itself. In this case responses are sped up rather than slowed down, and they are reduced in amplitude (Figure F–3B). The reduction in response amplitude occurs because the decay of the response to a pulse of input is accelerated: In addition to the decay that would otherwise occur, inhibition subtracts even more from the activity. Because the responses to individual pulses decay more quickly, the overall level to which they sum is decreased, but the rise of activity to this level occurs more quickly.
If recurrent excitation is increased to the point where activity set up by a transient input can sustain itself indefinitely, decay does not occur at all and the response is infinitely slowed. This requires fine-tuning of network parameters. The resulting circuit, known as an integrator network, has some interesting properties. The response of an integrator network to a transient pulse of input is a change in firing rate that lasts forever in the absence of further input but which becomes part of an ongoing integral if further input is applied (Figure F–3C). If the recurrent excitation in the network is not perfectly tuned but instead is slightly weaker, the input produces a change in firing rate that decays very slowly. Such approximate integrators are used to model neural circuits that remember signals.
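A perfectly tuned integrator corresponds to setting the recurrent weight to exactly 1, so the leak and the recurrent excitation cancel and the rate simply accumulates its input. A sketch, with illustrative pulse times and amplitudes:

```python
def integrator(pulses, tau=0.01, dt=0.0001, T=1.0):
    """Rate unit with perfectly tuned recurrent excitation (w = 1):
    tau*dr/dt = -r + w*r + h(t) reduces to tau*dr/dt = h(t), so the rate is
    the running integral of h(t)/tau. pulses is a list of
    (t_start, t_end, amplitude) triples, times in seconds."""
    r, trace = 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        h = sum(amp for (t0, t1, amp) in pulses if t0 <= t < t1)
        r += (dt / tau) * h          # -r and w*r cancel exactly when w = 1
        trace.append(r)
    return trace

# Two input pulses: the rate steps up during each pulse and holds its value
# between and after them.
tr = integrator([(0.1, 0.2, 0.5), (0.5, 0.6, 0.3)])
```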
What happens if the amount of excitation is increased beyond the point of perfect tuning that achieves an integrator? The examples shown in Figure F–3 all start and end in a resting state with zero activity. When excitation is overly strong, such a resting state, whether or not it is characterized by zero activity, becomes unstable. This means that after any small perturbation, induced for example by a transient input, the system will drive itself further away from this state rather than relaxing back to it. Nonlinear processes ultimately stabilize a new pattern of activity, which is determined primarily by the network's own recurrent excitation rather than by the input and which can be self-sustaining in the absence of an input.
Fixed patterns of activity established and maintained by recurrent circuitry are attractors, as described in Appendix E. Attractors are often used as models for the persistent neural activity thought to hold items in working memory. If many different activity patterns are each strongly self-excitatory, a network can generate far more complex, chaotic dynamics. It has been argued that this can create a rich set of temporal patterns in areas of cortex such as primary motor cortex that can be harnessed by signals from motor planning areas to drive complex movement patterns.
Balanced Recurrent Networks Can Behave Like Feedforward Networks
Anatomical and electrophysiological studies of neurons in layer IV of the primary visual cortex (and in other sensory areas) have revealed many more recurrent than feed-forward connections in this layer. This discovery might seem to rule out feed-forward networks as relevant models of cortical circuits. However, function need not follow numbers.
A prominent hypothesis, supported by considerable evidence, is that the feed-forward inputs provide the driving input to layer IV neurons while the recurrent inputs amplify and modulate but do not drive responses. Feed-forward circuits are also relevant in another way: Under appropriate circumstances, strongly recurrent circuits can act effectively in a feedforward manner.
To see how this works, we study a network consisting of two coupled populations, one excitatory and one inhibitory. To simplify the discussion we let the two projections from the excitatory population (to itself and to the inhibitory population) have the same strength, and likewise the two inhibitory projections. Thus we can speak of the strength of excitation (or inhibition)—meaning excitation (or inhibition) onto both excitatory and inhibitory neurons—without having to distinguish multiple projections of each type. Although this simplification dictates the specific results, it does not affect the overall conclusions as to how recurrent circuits can act in a feedforward manner and thus dissociate amplification from changes in dynamics.
We suppose the network is at a fixed point, meaning that there are steady excitatory and inhibitory firing rates in response to a steady feed-forward input, and that this fixed point is stable—after small transient perturbations the network will return to the fixed point. We suddenly increase the level of feed-forward input to the excitatory population (Figure F–3D). In this recurrent network the excitatory activity is amplified by the recurrent circuitry. Surprisingly, however, the time course of the increase is little different from that without recurrent circuitry. With different parameters the recurrently amplified response can exactly match the timing of, or even be faster than, the response without recurrent circuitry. Thus the recurrent circuit can amplify responses without any slowing of the temporal dynamics.
Apparently the mechanism of amplification is different from the recurrent amplification we considered previously. What is this mechanism? At the fixed point the difference between the excitation and inhibition received by a population is exactly that required to sustain the population's firing rates. Any change in the balance between excitation and inhibition drives a change in the firing rates until a new balance is restored. Increasing the feed-forward input to the excitatory population shifts the balance toward excitation, thus driving up both excitatory and inhibitory firing rates. Similarly, an excess of inhibition drives down both excitatory and inhibitory firing rates.
Let us represent the firing rates as differences from the fixed-point firing rates. In this representation firing rates may be either positive or negative. We can then formally represent any pattern of excitatory and inhibitory firing rates as a weighted combination of two activity patterns. In the differential pattern excitatory and inhibitory cells have equal and opposite firing rates. In the common pattern excitatory and inhibitory cells have identical firing rates. Given some excitatory and inhibitory firing rates, if we weight the common pattern so that its common firing rate is the average of the excitatory and inhibitory rates, and weight the differential pattern so that it captures the difference between excitatory and inhibitory firing rates, then the sum of the two weighted patterns equals the given firing rates.
The advantage of expressing the activities in terms of these two patterns is that it allows a deeper insight into the dynamics. A shift in the network's balance toward excitation involves an increase in the size of the differential pattern, that is, the signed difference between excitatory and inhibitory firing rates increases. This imbalance in turn drives an increase in the size of the common pattern, that is, both excitatory and inhibitory firing rates increase. Similarly, a shift in the balance toward inhibition involves a decrease in the differential pattern, which decreases the common pattern. The network thus behaves precisely as it would if the differential activity pattern made a feedforward synaptic connection to the common activity pattern, that is, imbalances drive balanced responses. Furthermore, there is no corresponding feedback from the common pattern onto the differential pattern. Balanced responses do not drive imbalances, nor does the differential pattern act on itself—imbalances do not drive further imbalance.
Thus the network of Figure F–3D, which appears to be fully recurrent when viewed in terms of the neurons, can be seen to have a hidden feed-forward structure when viewed in terms of differential and common activity patterns (it is hidden in the sense that it is not readily apparent in the synaptic connectivity of the network). This feed-forward pathway allows one activity pattern to excite the other without feedback (Figure F–4). The amplification driven by this "hidden" feedforward connection occurs with little dynamical slowing, whereas amplification as a result of a self-excitatory loop is achieved at the cost of slowing of the dynamics. This description is mathematically precise when the relationship of a neuron's firing rate to its input (see Box F–2) can be taken to be linear, and it provides a useful intuition for understanding this type of network behavior more generally.
An excitatory/inhibitory recurrent circuit is equivalent to a feed-forward circuit from a differential pattern (E – I) to a common pattern (E + I).
The input to E in the recurrent circuit becomes equal input to both the E – I and E + I patterns. The inhibitory loop within the E + I pattern arises if the inhibitory projections are stronger than the excitatory.
The strength of the amplification depends on two factors. One is the strength of the feed-forward connection from the differential to the common pattern, which is given by the sum of the excitatory and inhibitory synaptic strengths. The other is the strength of any remaining feedback loops. If we assume that inhibition is stronger than excitation, this results in a loop by which the common pattern inhibits itself (Figure F–4), because raising both excitatory and inhibitory rates produces a net inhibition of both the excitatory and inhibitory populations. This self-inhibition of the common pattern suppresses rather than amplifies responses; its strength is given by the difference between excitatory and inhibitory synaptic strengths. Thus the strongest amplification arises when both excitation and inhibition are strong but reasonably balanced.
This form of amplification can occur for each of many different spatial patterns of activity in a network of many neurons. In each spatial pattern an imbalance of excitatory and inhibitory activity can provide feed-forward drive to common excitatory and inhibitory activity, with different spatial patterns having different strengths of this drive and thus different degrees of amplification. This mechanism has been proposed to underlie observations of spontaneous activity in the primary visual cortex of anesthetized cats (that is, activity in the absence of a visual stimulus).
Despite the absence of a visual stimulus, spatial patterns of activity resembling responses to structured visual inputs make a larger contribution to the overall spontaneous activity than patterns unrelated to these visual responses. If we think of spontaneous activity as being driven by unstructured inputs to the visual cortex that equally drive many different patterns, then the patterns that resemble visually driven responses are being amplified by the visual cortical circuit more than other patterns. What is the mechanism underlying this amplification?
Preliminary analysis suggests that the dynamics of the amplified patterns are not significantly slowed. In a model network with balanced excitatory and inhibitory connections that preferentially target cells with similar orientation selectivity, the spatial patterns that resemble visually driven responses have the largest effective feed-forward weights and so are amplified relative to other spatial patterns, without dynamical slowing, by the mechanism shown in Figure F–3D. This provides a possible explanation for the amplification of activity patterns that resemble visual responses in the absence of a visual stimulus.
Paradoxical Effects in Balanced Recurrent Networks May Underlie Surround Suppression in the Visual Cortex
In this section we discuss another effect that can arise in networks in which excitation and inhibition are both strong but relatively balanced. We consider a network that satisfies two criteria.
First, the excitatory recurrence is strong enough to make the excitatory network unstable by itself. That is, if the network is at a stable fixed point, and if inhibitory firing rates were kept frozen at their fixed-point levels, then after small perturbations of excitatory firing rates the excitatory network would drive itself even further away from its fixed-point rates. Second, feedback inhibition (inhibition driven by the excitatory cells) stabilizes the network. A slight change in excitatory firing rates drives a sufficient change in inhibitory firing rates to push the excitatory rates back to their fixed-point levels, despite the tendency of the excitatory network to "run away" on its own.
We refer to a network meeting these two criteria as an inhibition-stabilized network. The strong recurrent excitation received by excitatory cortical neurons and the instability of cortical activity when inhibition is blocked suggest that cortical circuits may indeed be stabilized in this way.
Inhibition-stabilized networks provide a possible explanation for a paradoxical experimental observation. The region of visual space within which an appropriate visual stimulus can elicit a response in a neuron in the primary visual cortex (V1) is known as the center region of the cell's receptive field. For many V1 neurons, increasing the size of the stimulus so that it also covers the surrounding region (the surround) reduces the response, a phenomenon known as surround suppression. However, a stimulus covering only the surround and not the center yields no response.
It is believed that stimulation of the center of a neuron's receptive field drives external excitatory input relayed from the eyes to both excitatory and inhibitory neural populations in the local cortical circuit, whereas a surround stimulus excites neighboring regions of cortex that send excitation more strongly to the inhibitory population within the local circuit. We might therefore expect that a stimulus in the surround should increase firing in the local inhibitory population, which in turn would suppress responses in the excitatory population.
Instead, experiments by David Ferster and colleagues indicate that a surround stimulus reduces both the excitation and the inhibition that a V1 neuron receives. That is, surround suppression is actually mediated by a reduction in the excitation of a cell, which has a larger effect than a concurrent reduction in inhibition. The reduction in inhibition suggests that the firing rate of the inhibitory population, like that of the excitatory population, is reduced by the surround stimulus, and this has been directly confirmed by Xue-Mei Song and Chao-Yi Li. Thus we arrive at the paradoxical result: A surround stimulus that is believed to drive external excitation to an inhibitory population causes a net decrease in the inhibitory population's firing rate.
This paradox can be explained by the presence in the cortex of strong recurrent excitation that is stabilized by inhibition, as shown in a model constructed by Misha Tsodyks and colleagues in a different context. The inhibitory neurons receive such strong drive from the excitatory neurons that their activity is determined more by the excitatory neurons than by any external input.
To see this, consider an inhibition-stabilized network composed of two populations of neurons, one excitatory and the other inhibitory (as in Figure F–3D), each initially firing at steady rates in response to a constant center stimulus (Figure F–5A). Adding a surround stimulus provides additional excitation to the inhibitory neurons from external sources. This transiently increases the firing of the inhibitory neurons (Figure F–5B), which in turn lowers the firing rates of the excitatory neurons in the network (Figure F–5C), causing a withdrawal of recurrent excitatory input to the inhibitory neurons. Precisely when the network is inhibition-stabilized, this withdrawal of recurrent excitatory input to the inhibitory neurons exceeds the increase in excitation from external sources that started the process, so that the ultimate result, paradoxically, is that inhibitory firing rates are also lowered (Figure F–5D).
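The paradoxical sequence can be reproduced with a two-population rate model. In the sketch below the weights are illustrative but satisfy the inhibition-stabilized conditions (wEE > 1, so the excitatory subnetwork is unstable on its own, with stabilizing feedback inhibition): adding external drive to the inhibitory population transiently raises the inhibitory rate, after which both rates settle below their original values.

```python
def isn_surround(hE=10.0, dhI=4.0, wEE=1.5, wEI=1.0, wIE=2.0, wII=1.0,
                 tau=0.01, dt=0.0001, T=0.3):
    """Inhibition-stabilized rate model: the excitatory subnetwork alone is
    unstable (wEE > 1) but feedback inhibition stabilizes it. The network
    starts at its fixed point for the center stimulus alone (rE = rI = 2*hE
    for these weights); the extra drive dhI to the inhibitory population
    mimics a surround stimulus. All values are illustrative."""
    rE, rI = 2.0 * hE, 2.0 * hE          # fixed point before the surround input
    trace = []
    for _ in range(int(T / dt)):
        rE += (dt / tau) * (-rE + wEE * rE - wEI * rI + hE)
        rI += (dt / tau) * (-rI + wIE * rE - wII * rI + dhI)
        trace.append((rE, rI))
    return trace

# The inhibitory rate first rises above 20, then both rates settle below
# their initial value of 20 (rE -> 16, rI -> 18 for these parameters).
```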
Sequence of events following addition of a surround stimulus to a center stimulus in an inhibition-stabilized network model of primary visual cortex.
The circuit consists of a population of excitatory neurons (E) that recurrently excite one another, and a population of inhibitory neurons (I) that recurrently inhibit one another (red/pink synapses are excitatory, black/grey synapses are inhibitory). The excitatory cells excite the inhibitory neurons, which in turn provide feedback inhibition to the excitatory cells. Stronger colors indicate higher levels of activity of a neuron or synapse. At all times the network receives a steady input driven by a steady center stimulus (not shown). The plot below shows excitatory and inhibitory firing rates as functions of time. The points in time at which conditions A–D occur are indicated. (Adapted, with permission, from Ozeki et al. 2009.)
A. The circuit before the addition of the surround input. The populations are firing at steady rates in response to the center stimulus. The surround input is not yet activated.
B. After the surround input is activated at 50 ms, inhibitory firing rates initially increase.
C. This additional inhibitory input drives down excitatory firing rates, resulting in withdrawal of recurrent excitation from both excitatory and inhibitory neurons and a corresponding decrease in inhibitory firing rates.
D. When the network is inhibition-stabilized, this withdrawal of recurrent excitation to the inhibitory neurons is larger than the surround-induced increase of external excitation. Thus, in the end the inhibitory neurons receive less excitation than they did initially, and accordingly their firing rate is decreased.
It is natural to think that, with inhibitory firing rates lowered, excitatory firing rates should then rise back above their initial levels, but this does not occur. To understand why, we must note one more property of the unstable excitatory subnetwork. Recall that an increase in excitatory firing recruits so much extra recurrent excitation that, in the absence of changes in inhibitory firing, it would drive excitatory firing still higher. So too a decrease in excitatory firing withdraws so much recurrent excitation that excitatory firing rates would fall still lower in the absence of changes in inhibitory firing. The lowering of inhibitory firing rates decreases feedback inhibition, thus compensating for the deficiency of excitation and so stabilizing the lower firing rates of the excitatory population. The network thus arrives at a new stable fixed point in which both excitatory and inhibitory cells have lower firing rates than they did before additional external excitation was added to the inhibitory population.
This paradoxical result, in which adding excitatory input to the inhibitory cells results in a decrease in their steady state firing rate, is actually another instance of a hidden feed-forward connection by which a small differential or imbalance between excitation and inhibition can drive a large common response of both excitation and inhibition. In the present case the addition of excitatory input to the inhibitory cells drives a negative imbalance, which in turn drives a large negative common response, that is, a decrease in both excitatory and inhibitory rates. However, the increase in input also directly drives an increase in inhibitory firing rates. Thus there are two competing effects on inhibitory firing. The instability of the excitatory subnetwork turns out to be precisely equivalent to the condition in which the feed-forward effect is larger than the direct input effect, so that the net effect is a decrease in inhibitory firing rates.
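The equivalence between the instability of the excitatory subnetwork and the paradoxical inhibitory response can be sketched in a few lines of algebra for a linearized two-population rate model (the weights $W$ and inputs $I$ below are generic effective parameters, not values quoted in the text):

\[
\tau_E \frac{dr_E}{dt} = -r_E + W_{EE}\, r_E - W_{EI}\, r_I + I_E, \qquad
\tau_I \frac{dr_I}{dt} = -r_I + W_{IE}\, r_E - W_{II}\, r_I + I_I .
\]

Setting both derivatives to zero and solving for the inhibitory rate gives

\[
r_I = \frac{(1 - W_{EE})\, I_I + W_{IE}\, I_E}{D}, \qquad
D = (1 - W_{EE})(1 + W_{II}) + W_{EI} W_{IE} ,
\]

where $D > 0$ is required for stability of the full network. Thus $\partial r_I / \partial I_I = (1 - W_{EE})/D$ is negative exactly when $W_{EE} > 1$, that is, exactly when the excitatory subnetwork is unstable on its own.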
The effect shown in Figure F–5 matches the observations we described earlier. When a stimulus in the receptive field surround results in additional excitatory input to inhibitory neurons, both the excitation and the inhibition in the recorded neurons is reduced. This finding and the theoretical analysis of it (only a portion of which is discussed here) provide strong evidence that the cortex operates in a regime in which recurrent excitation by itself is strong enough to be unstable but is stabilized by recurrent inhibition.
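The sequence of events in Figure F–5 can be reproduced in a minimal linear rate model of the two populations. This is only a sketch: the weights, inputs, and time constants below are illustrative choices (with W_EE > 1 so that the excitatory subnetwork is unstable on its own), not measured cortical parameters.

```python
import numpy as np

# Minimal linear rate model of an inhibition-stabilized network (ISN).
# All weights, inputs, and time constants are illustrative values.
tau_E = tau_I = 10.0      # time constants (ms)
W_EE, W_EI = 2.0, 2.0     # W_EE > 1: the E subnetwork alone is unstable
W_IE, W_II = 2.0, 1.0     # strong E->I drive; inhibition stabilizes E

def simulate(I_E, I_I_before, I_I_after, t_switch=200.0, t_end=600.0, dt=0.1):
    """Euler-integrate the two-population rate equations; the extra
    drive to the inhibitory population (the 'surround') switches on
    at t_switch."""
    r_E = r_I = 0.0
    out = []
    t = 0.0
    while t < t_end:
        I_I = I_I_before if t < t_switch else I_I_after
        dr_E = (-r_E + W_EE * r_E - W_EI * r_I + I_E) / tau_E
        dr_I = (-r_I + W_IE * r_E - W_II * r_I + I_I) / tau_I
        r_E += dt * dr_E
        r_I += dt * dr_I
        out.append((t, r_E, r_I))
        t += dt
    return np.array(out)

rates = simulate(I_E=10.0, I_I_before=2.0, I_I_after=4.0)
# Steady state before the surround: r_E = 8, r_I = 9.
# Adding drive to I transiently raises r_I, but the new steady state
# is r_E = 6, r_I = 8: both rates end up lower (the paradox).
```

With these numbers the network settles at r_E = 8, r_I = 9 before the surround; after extra excitation is delivered to the inhibitory population, r_I briefly rises and then both rates settle at lower values (r_E = 6, r_I = 8), reproducing the paradoxical suppression.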
Recurrent Networks Can Model Decision-Making
As a final example of network modeling, we turn to circuits that select between different behaviors, that is, circuits that make decisions. Suppose the driver of an automobile needs to decide whether to turn right or left. Assume that there are two populations of excitatory neurons, one active when the decision is a right turn, the other when it is a left turn. Under some circumstances no decision needs to be made, so neither population should be highly active.
A decision can be biased or unbiased by sensory input. If, for example, the sensory stimulus is a road sign that says "turn left," the sensory input should bias the decision toward a left turn. If the sensory stimulus is an obstacle in the middle of a three-lane highway, there is a need to turn but the direction may be arbitrary. In this case the sensory input should evoke a decision without biasing it. Between these extremes, inputs may provide a range of biasing effects.
To model decision-making in this context we follow the work of X. J. Wang, who has modeled decision- making circuits extensively. A network model of the kind of decision-making described above must have the following properties. First, in the absence of relevant sensory stimuli there should be a stable pattern of spontaneous activity corresponding to no decision. Second, a sensory stimulus requiring a decision should eliminate or destabilize the no-decision state and introduce two new stable firing patterns corresponding to the two possible actions. Third, sensory stimuli should be capable of biasing the outcome so that one of these decision states is more likely to occur than the other.
In our model different decision states are represented by two recurrently connected networks of excitatory neurons, both of which excite a single population of inhibitory neurons that returns feedback inhibition to both of them (Figure F–6A). In the absence of sensory stimuli the network stays in a no-decision state in which the neurons have low activity. In this state we assume that the neurons have a low level of input, and their activity reflects an equilibrium involving this input, recurrent excitation, and feedback inhibition. A stimulus that induces a decision takes the form of excitatory drive to both excitatory populations of the network.
A decision-making network.
Two excitatory populations of neurons are active during two different decisions. An inhibitory population receives excitation from both excitatory populations and returns inhibition to both of them. A stimulus is presented at 200 ms. Before this time the network is in the no-decision state, in which excitatory populations have low firing rates.
A. In response to an unbiased stimulus the firing rates of both populations initially rise but then separate. The firing rate of the orange population ends up at a high value because small random fluctuations (too small to see) raise it slightly higher than the firing rate of the purple population. This small difference is then amplified by the network, leading to a large difference in the two rates. Ultimately, only the firing rate of the orange population remains high, corresponding to the decision.
B. A stimulus biased in favor of the orange population generates a larger input to one excitatory population than the other. Note that the decision state is reached more rapidly than in the case of equal inputs (part A).
When there is no bias favoring one decision over the other, the inputs to the two excitatory populations are equal. Nevertheless, the model does make a decision—the firing rate of one population rises to and remains at a high level, while that of the other population falls back to a low firing rate after an initial rise (Figure F–6A). This occurs because the stimulus-generated inputs force both excitatory networks away from their no-decision firing rates. With the no-decision state eliminated, only two stable states remain available to the system, each corresponding to a different decision. One population ends up at a higher rate because small random fluctuations in the firing rates happen to favor that group (a small amount of noise was added to the model to generate these fluctuations). As a result of the fluctuations, each decision occurs 50% of the time.
When the stimulus is biased in favor of one population, the input to that population is higher than the input to the other. As a result, the firing rate of the favored population rises and remains high, while that of the other population falls after a brief and small rise (Figure F–6B). A strong-enough stimulus bias will produce the favored decision almost 100% of the time.
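The behavior in Figure F–6 can be sketched in a few lines of simulation. The sigmoidal rate function, weights, and noise level below are illustrative assumptions (the inhibitory population is simplified to a linear unit); they are not the parameters of Wang's model.

```python
import numpy as np

# Sketch of the two-population decision circuit: two excitatory
# populations with self-excitation share one (here linear) inhibitory
# population. All parameters are illustrative choices.
r_max, theta, sigma = 50.0, 10.0, 2.0

def f(x):
    """Sigmoidal rate function: insensitive at low and saturated rates."""
    return r_max / (1.0 + np.exp(-(x - theta) / sigma))

def trial(I1, I2, t_stim=200.0, t_end=1500.0, dt=0.5, seed=0):
    """Run one trial; the stimulus (inputs I1, I2) turns on at t_stim."""
    rng = np.random.default_rng(seed)
    tau_E, tau_I = 20.0, 10.0
    w_self, w_EI, w_IE = 0.5, 0.8, 0.5
    r1 = r2 = rI = 0.0
    t = 0.0
    while t < t_end:
        on = t >= t_stim
        # Small input noise (illustrative, not a calibrated diffusion)
        n1, n2 = rng.normal(0.0, 0.2, size=2)
        x1 = w_self * r1 - w_EI * rI + (I1 if on else 0.0) + n1
        x2 = w_self * r2 - w_EI * rI + (I2 if on else 0.0) + n2
        r1 += dt * (-r1 + f(x1)) / tau_E
        r2 += dt * (-r2 + f(x2)) / tau_E
        rI += dt * (-rI + w_IE * (r1 + r2)) / tau_I
        t += dt
    return r1, r2

# Biased stimulus: population 1 receives the larger input and wins.
r1_final, r2_final = trial(I1=9.0, I2=7.0)
```

With an unbiased stimulus (I1 = I2), noise alone breaks the tie and each population wins on about half of the trials; a biased stimulus makes the favored population win almost always.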
The latency from the time of the stimulus to the time a decision occurs can be determined by examining the divergence between the firing rates of the two neuronal populations. A decision is made more rapidly when the stimulus is biased than when it is unbiased (Figure F–6). This situation is similar to what is observed in experiments on perceptual decision-making. In these experiments a monkey is trained to report the perceived overall direction of a field of dots that individually move in many different directions on a screen. The monkey's performance depends on the coherence of the dots' motion. On most trials the dots have an overall tendency to move in one of two possible directions, introducing a type of stimulus bias. On some trials the coherence is zero, but the monkey still must choose a direction of motion from these two possibilities. This situation is analogous to that shown in Figure F–6A.
There is experimental evidence that neural populations representing the two choice possibilities may be located in the posterior parietal cortex, specifically the lateral intraparietal area. Neurons recorded in this area by Michael Shadlen and collaborators during perceptual decision experiments involving moving dots behave like the model neurons in Figure F–6. Neural activity is initially driven by the sensory stimulus but then increases for one decision and decreases for the other. The final decision is preceded by a ramping activity that is relatively slow in the case of zero coherence of motion of the dots. This may be an indication of the existence of a period during which fluctuations are accumulating until a decision is made.
How does the model shown in Figure F–6 work and how was it constructed? The model relies on two key elements: bistability and inhibition-mediated competition between the two populations of excitatory neurons. Bistability is the ability of a network to sustain activity corresponding to either of two different states, typically one with a low rate of firing and one with a high rate. Persistent firing at a high rate is made possible by strong recurrent excitation. Bistability requires the level of activity of a neuron to depend on its recurrent input in a nonlinear way. In the example of Figure F–6 the nonlinearity is ensured by a sigmoidal neuronal response curve that makes the firing rate relatively insensitive to changes in input both at low firing rates (when mean voltage is well below threshold) and at high firing rates (when rates saturate) but highly sensitive at intermediate firing rates. At an intermediate level of firing the high sensitivity to changes in input renders the excitatory feedback unstable, forcing the network to high or low firing rates. At high or low firing rates the excitatory feedback becomes stable because of the weakened neuronal sensitivity, allowing a stable firing rate.
In a bistable network low and high firing rates are stable in the presence of small transient input pulses. Even if these transient inputs modify the firing rate of the network, the firing rate returns to its initial state after the input pulse terminates. However, larger input pulses can induce transitions from one firing rate to the other (Figure F–7).
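A single self-exciting population with a sigmoidal rate function suffices to illustrate these transitions. The parameters and pulse sizes below are illustrative assumptions, chosen so that the network has two stable rates (near 1 and near 50 spikes per second) separated by an unstable intermediate rate near 11.

```python
import numpy as np

# One excitatory population with strong self-excitation and a sigmoidal
# rate function; brief input pulses probe the two stable states.
# All parameter values are illustrative choices.
r_max, theta, sigma = 50.0, 10.0, 2.0
w, I0, tau, dt = 0.5, 2.0, 20.0, 0.5

def f(x):
    return r_max / (1.0 + np.exp(-(x - theta) / sigma))

def pulse_input(t):
    """Baseline drive plus four 50-ms pulses of differing size and sign."""
    for t0, amp in [(200, 1.0), (500, 4.0), (800, -4.0), (1100, -20.0)]:
        if t0 <= t < t0 + 50:
            return I0 + amp
    return I0

r, trace, t = 1.0, [], 0.0
while t < 1400.0:
    r += dt * (-r + f(w * r + pulse_input(t))) / tau
    trace.append((t, r))
    t += dt
trace = np.array(trace)
```

The +1 and −4 pulses perturb the rate only transiently; the +4 pulse carries the rate past the unstable middle point so the network settles in the high state, while a much larger −20 pulse is needed to knock it back down, since the high state sits far from the unstable point.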
A bistable network can alternate between two different states.
The network starts in a low firing state. The first input pulse is too small to induce a transition from this state, but the larger second pulse flips the network into the high firing state. The third pulse is again too small to induce a transition, but the final pulse flips the network back to the low firing state.
The other crucial component in the model is inhibition. The decision model in Figure F–6 consists of two populations of excitatory neurons, each of which can be bistable, and an inhibitory population that is driven by both excitatory populations and reciprocally inhibits both of them. The inhibitory population prevents the state in which both excitatory populations fire at high rates, which would correspond to making both decisions at once; sufficiently strong inhibition of the two excitatory populations rules this state out. The resulting model is described mathematically in Box F–3. It and models like it provide an elegant way of simulating decision-making, including such features as the time required to make a decision and the frequency of errors.
Laurence F. Abbott
Kenneth D. Miller