

  • Early Neural Network Modeling

  • Neurons Are Computational Devices

    • A Neuron Can Compute Conjunctions and Disjunctions

    • A Network of Neurons Can Compute Any Boolean Logical Function

  • Perceptrons Model Sequential and Parallel Computation in the Visual System

    • Simple and Complex Cells Could Compute Conjunctions and Disjunctions

    • The Primary Visual Cortex Has Been Modeled As a Multilayer Perceptron

    • Selectivity and Invariance Must Be Explained by Any Model of Vision

    • Visual Object Recognition Could Be Accomplished by Iteration of Conjunctions and Disjunctions

  • Associative Memory Networks Use Hebbian Plasticity to Store and Recall Neural Activity Patterns

    • Hebbian Plasticity May Store Activity Patterns by Creating Cell Assemblies

    • Cell Assemblies Can Complete Activity Patterns

    • Cell Assemblies Can Maintain Persistent Activity Patterns

    • Interference Between Memories Limits Capacity

    • Synaptic Loops Can Lead to Multiple Stable States

    • Symmetric Networks Minimize Energy-Like Functions

    • Hebbian Plasticity May Create Sequential Synaptic Pathways

  • An Overall View

By itself a single neuron is not intelligent. But a vast network of neurons can think, feel, remember, perceive, and generate the many remarkable phenomena that are collectively known as "the mind." How does intelligence emerge from the interactions between neurons? This is the central question motivating the study of neural networks. In this appendix we provide a brief historical review of the field, introduce some key concepts, and discuss two influential models of neural networks, the perceptron and the cell assembly.

Since the 1940s, researchers have proposed and studied many brain models in which sophisticated computations are performed by networks of simple neuron-like elements. Most models are based on two shared principles. First, our immediate experience is rooted in ongoing patterns of action potentials in brain cells. Second, our ability to learn from and remember past experiences depends at least partially on long-lasting modifications of synaptic connections. Although these principles are widely accepted by neuroscientists, they immediately raise many difficult questions.
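The simplest such neuron-like element is a binary threshold unit in the style of McCulloch and Pitts: it fires if its weighted input reaches a threshold. The sketch below is illustrative only; the weights and thresholds are chosen by hand to show how a single unit can compute a conjunction (AND) or a disjunction (OR) of its inputs.

```python
def threshold_unit(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: output 1 iff the weighted
    sum of binary inputs meets or exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Conjunction: with unit weights, both inputs must be active
# to reach a threshold of 2.
def AND(a, b):
    return threshold_unit([a, b], [1, 1], threshold=2)

# Disjunction: a threshold of 1 means either active input suffices.
def OR(a, b):
    return threshold_unit([a, b], [1, 1], threshold=1)
```

Layering such units yields networks that, as the section below discusses, can compute any Boolean logical function.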

For example, to our conscious minds, perceiving an object or moving a limb is experienced as a single, unitary event. But in the brain either act results from a stupendous number of neural events—discharges of action potentials and releases of neurotransmitter vesicles—each indiscernible to the conscious mind. How are these events united into a coherent perception or movement?

Storage of our immediate experience in long-term memory is presumed to occur through changes in synaptic connections. But how exactly is a memory divided up and distributed across many synapses? If some synapses are used to store more than one memory, how is interference between memories avoided? When past experiences are recalled from memory, how might synaptic connections evoke a pattern of firing similar to one that occurred in the past? Finally, when we reason, daydream, or otherwise float in the stream of consciousness, our mental state is not directly tied to any immediate sensory stimulus or motor output. How do networks of neurons dynamically ...
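One classic answer to how synaptic connections can re-evoke a past firing pattern is the Hebbian associative network taken up later in this appendix: patterns are stored as outer products in a symmetric weight matrix, and a partial cue is iteratively driven toward the nearest stored pattern. The sketch below is a minimal illustration, not a model from the text; the patterns, network size, and update rule are assumptions chosen for the example.

```python
import numpy as np

def hebbian_weights(patterns):
    """Store +/-1 patterns by the Hebbian outer-product rule,
    with self-connections removed (zero diagonal)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=10):
    """Synchronously update all units until the state settles;
    ties (zero input) are broken toward +1."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s.astype(int)

# Two orthogonal example patterns in an 8-unit network.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = hebbian_weights(patterns)

# A corrupted cue: the first pattern with its first unit flipped.
cue = np.array([-1, 1, 1, 1, -1, -1, -1, -1])
completed = recall(W, cue)  # settles back onto the first pattern
```

This toy network also exhibits the capacity limit mentioned above: storing too many correlated patterns makes the outer-product terms interfere, and recall degrades.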
