The brain is modeled as a set of brain areas, each with \(n\) neurons. Neurons within an area are connected by a random directed graph, where each directed edge is present independently with probability \(p\). Areas can be connected to each other by random directed bipartite graphs.

Neurons are activated at discrete time steps based on the activity of the neurons connected to them by synapses, which start with uniform weights that can change over time. At each time step, the \(k\) neurons in each area with the highest total weighted input are activated. Synapse weights change via Hebbian plasticity: if neuron \(a\) fires at one time step and neuron \(b\) fires at the next, then the synapse from \(a\) to \(b\) (if present) has its weight multiplied by \(1 + \beta\).
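As a concrete sketch of these dynamics (unofficial, assuming NumPy; the parameter values below are illustrative, not from the papers), one time step within a single area combines k-winners-take-all selection with a Hebbian weight update:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, p, beta = 1000, 50, 0.01, 0.1   # area size, cap size, edge prob., plasticity

# Random directed graph within one area: W[i, j] > 0 iff synapse i -> j exists.
W = (rng.random((n, n)) < p).astype(float)

def step(active, W):
    """One discrete time step: k-winners-take-all, then Hebbian plasticity."""
    inputs = W[active].sum(axis=0)        # total weighted input to every neuron
    new_active = np.argsort(inputs)[-k:]  # the k neurons with the most input fire
    # Synapses from neurons firing at time t to neurons firing at t+1 are
    # scaled by (1 + beta); absent synapses (weight 0) stay 0.
    W[np.ix_(active, new_active)] *= 1 + beta
    return new_active

active = rng.choice(n, size=k, replace=False)  # an initial stimulus
active = step(active, W)
```

The `np.ix_` call selects exactly the submatrix of synapses from the previously firing set to the newly firing set, so only those weights are potentiated.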

Projection is the repeated activation of a fixed set of neurons in one area (possibly an external sensory area with no recurrent connections), which in turn activates neurons in a second area; the activated subset of the second area converges to a nearly stable set of neurons, termed an assembly. Assemblies are much more densely interconnected than the underlying area. Assemblies represent concepts, ranging from specific things such as numbers or images to more abstract ones such as circles. Assemblies come with a repertoire of operations, the Assembly Calculus, which includes Projection, Recall, Association, Pattern Completion, and Merge.
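The projection operation above can be sketched as follows (an unofficial illustration, assuming NumPy; names, parameter values, and the simple fixed-point stopping rule are choices made here, not prescribed by the papers). A fixed stimulus fires into a target area whose recurrent activity feeds back on itself, and the winner set is tracked until it stops changing:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, p, beta = 1000, 50, 0.01, 0.1

# Bipartite graph from the stimulus area into the target area, plus the
# target area's own recurrent random graph.
W_in = (rng.random((n, n)) < p).astype(float)   # stimulus -> target
W_rec = (rng.random((n, n)) < p).astype(float)  # target -> target

stimulus = rng.choice(n, size=k, replace=False)  # fixed set fired every step
prev = np.array([], dtype=int)                   # target winners, initially none

for t in range(50):
    # Input = feedforward drive from the stimulus + recurrent drive from the
    # previous winners; then cap at the top k.
    inputs = W_in[stimulus].sum(axis=0) + W_rec[prev].sum(axis=0)
    winners = np.argsort(inputs)[-k:]
    # Hebbian update on both the feedforward and the recurrent synapses.
    W_in[np.ix_(stimulus, winners)] *= 1 + beta
    W_rec[np.ix_(prev, winners)] *= 1 + beta
    stable = len(np.intersect1d(prev, winners)) == k
    prev = winners
    if stable:       # winner set repeated exactly: treat it as converged
        break

assembly = prev      # the (nearly) stable winner set is the assembly
```

In the theory the winner set is only almost stable, so a practical stopping rule might instead require the overlap between consecutive winner sets to exceed a threshold rather than be exact.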

Assemblies can be formed hierarchically. A constant number of brain areas with \(n\) neurons each can in principle simulate arbitrary Turing machine computations that use up to \(\sqrt{n}\) space. Perhaps more interestingly, these simple operations suffice for learning well-separated concept classes (one assembly per class), and for a skeleton architecture for the parsing and generation of language.
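A minimal sketch of the classification idea, under the same assumptions as above (NumPy; all parameter values, the noise model, and the overlap-based readout are illustrative choices, not the construction from the Dabagia et al. paper): each class is a noisy stimulus around a fixed core, each class is projected into the learning area to form its assembly, and a fresh sample is read out by overlap with the stored assemblies.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, p, beta = 1000, 50, 0.05, 0.1

W_in = (rng.random((n, n)) < p).astype(float)   # stimulus -> learning area
W_rec = (rng.random((n, n)) < p).astype(float)  # recurrent, learning area

cores = [rng.choice(n, k, False), rng.choice(n, k, False)]  # two class cores

def sample(core):
    # A class sample: most of the core, with a small fraction of neurons
    # replaced at random (a simple stand-in for a well-separated distribution).
    keep = rng.random(k) > 0.1
    return np.concatenate([core[keep], rng.choice(n, (~keep).sum(), False)])

def project(stim, prev, learn=True):
    inputs = W_in[stim].sum(axis=0) + W_rec[prev].sum(axis=0)
    winners = np.argsort(inputs)[-k:]
    if learn:  # Hebbian updates only during training
        W_in[np.ix_(stim, winners)] *= 1 + beta
        W_rec[np.ix_(prev, winners)] *= 1 + beta
    return winners

# Training: repeatedly project samples of each class, forming one assembly per class.
assemblies = []
for core in cores:
    prev = np.array([], dtype=int)
    for _ in range(10):
        prev = project(sample(core), prev)
    assemblies.append(prev)

# Readout: a fresh sample is assigned to the class whose assembly it overlaps most.
test = project(sample(cores[0]), np.array([], dtype=int), learn=False)
overlaps = [len(np.intersect1d(test, a)) for a in assemblies]
label = int(np.argmax(overlaps))
```

With well-separated classes the sample's strengthened feedforward synapses should drive the winners toward its own class assembly, which is the intuition the paper makes rigorous.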

Demo

References

Buzsáki G., The Brain from Inside Out (Oxford University Press, 2019).

Dabagia, M., Papadimitriou, C. H., & Vempala, S. S. (2021). Assemblies of neurons can learn to classify well-separated distributions. arXiv preprint arXiv:2110.03171.

Papadimitriou, C. H., & Vempala, S. S. (2019, January). Random projection in the brain and computation with assemblies of neurons. In 10th Innovations in Theoretical Computer Science Conference (ITCS).

Papadimitriou, C. H., Vempala, S. S., Mitropolsky, D., Collins, M., & Maass, W. (2020). Brain computation by assemblies of neurons. Proceedings of the National Academy of Sciences, 117(25), 14464-14472.

Assemblies Simulation GitHub Link

Contact

Christos H. Papadimitriou, Columbia University, christos@columbia.edu

Santosh S. Vempala, Georgia Institute of Technology, vempala@cc.gatech.edu

Seung Je Jung, Georgia Institute of Technology, sjung323@gatech.edu