Brain-like neuronal computation

December 22, 2015 — October 3, 2023

learning
life
mind
neuron
probability
statistics
statmech

Neural networks that are more biologically plausible than what we usually call (artificial) neural networks. Synthetic brains, if you’d like.

1 Forward-forward networks

Neural networks trained without backprop are “more” biologically plausible than the standard kind. Here is one class of such networks (Hinton, n.d.; Ren et al. 2022). From Hinton’s abstract:

The aim of this paper is to introduce a new learning procedure for neural networks and to demonstrate that it works well enough on a few small problems to be worth serious investigation. The Forward-Forward algorithm replaces the forward and backward passes of backpropagation by two forward passes, one with positive (i.e. real) data and the other with negative data which could be generated by the network itself. Each layer has its own objective function which is simply to have high goodness for positive data and low goodness for negative data. The sum of the squared activities in a layer can be used as the goodness but there are many other possibilities, including minus the sum of the squared activities. If the positive and negative passes can be separated in time, the negative passes can be done offline, which makes the learning much simpler in the positive pass and allows video to be pipelined through the network without ever storing activities or stopping to propagate derivatives.
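To make the recipe concrete, here is a minimal sketch of a single Forward-Forward layer update, assuming PyTorch. The layer sizes, goodness threshold, loss shape, and learning rate are illustrative choices of mine, not taken from the paper’s code.

```python
# A hedged sketch of one Forward-Forward layer update (assumed details:
# sizes, threshold, optimiser, and the logistic-style local loss).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

layer = torch.nn.Linear(784, 500)
opt = torch.optim.SGD(layer.parameters(), lr=0.03)
threshold = 2.0  # free parameter: goodness level separating pos from neg

def goodness(x):
    # Goodness = sum of squared activities in the layer.
    h = torch.relu(layer(x))
    return h.pow(2).sum(dim=1)

def ff_step(x_pos, x_neg):
    # Normalise inputs so a layer cannot simply pass along the previous
    # layer's goodness (vector length) instead of computing something new.
    x_pos = F.normalize(x_pos, dim=1)
    x_neg = F.normalize(x_neg, dim=1)
    # Local objective: push positive goodness above the threshold and
    # negative goodness below it. Gradients stay inside this layer;
    # nothing is backpropagated through the rest of the network.
    loss = (F.softplus(threshold - goodness(x_pos)).mean()
            + F.softplus(goodness(x_neg) - threshold).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Two forward passes on stand-in data: "positive" (real) inputs and
# "negative" inputs, which in the paper may be generated by the network.
x_pos, x_neg = torch.rand(32, 784), torch.rand(32, 784)
print(ff_step(x_pos, x_neg))
```

In a multi-layer network, each layer would repeat this recipe on the normalised (and detached) activities of the layer below, so no global backward pass is ever needed.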

2 NEMO

Christos Papadimitriou, “How Does the Brain Create Language?” (talk, EECS at UC Berkeley). From the abstract:

There is little doubt that cognitive phenomena are the result of neural activity. However, there has been slow progress toward articulating an overarching computational theory of how exactly this happens. I will discuss a simplified mathematical model of the brain, which we call NEMO, involving brain areas, spiking neurons, random synapses, local inhibition, Hebbian plasticity, and long-range interneurons. Emergent behaviours of the resulting dynamical system – established both analytically and through simulations – include assemblies of neurons, sequence memorization, one-shot learning, and universal computation. NEMO can also be seen as a software-based neuromorphic system that can be simulated efficiently at the scale of tens of millions of neurons, emulating certain high-level cognitive phenomena such as planning and parsing of natural language. I will describe current work aiming at creating through NEMO a neuromorphic language organ: a neural tabula rasa which, on input consisting of a modest amount of grounded language, is capable of language acquisition: lexicon, syntax, semantics, comprehension, and generation. Finally, and on the plane of scientific methodology, I will argue that experimenting with such brain-like devices, devoid of backpropagation, can reveal novel avenues to learning, and may end up advancing AI.
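The core primitive behind those emergent assemblies is “projection”: fire a stimulus into an area with random synapses, keep only the k most excited neurons (local inhibition), and strengthen the synapses that fired (Hebbian plasticity). Below is a toy sketch of that operation as I read Papadimitriou et al. (2020); the sizes and rates are illustrative, and the full model also includes recurrent synapses within the target area, omitted here for brevity.

```python
# A hedged toy sketch of assembly-calculus "projection": random synapses,
# k-winners-take-all inhibition, Hebbian plasticity. Parameters are
# illustrative assumptions, not values from the papers.
import numpy as np

rng = np.random.default_rng(0)

n, k = 1000, 32       # neurons per area; cap size (winners per step)
p, beta = 0.05, 0.1   # synapse probability; Hebbian plasticity rate

# Random directed synapses from stimulus area A to target area B:
# W[i, j] is the weight of the synapse from neuron i in A to j in B.
W = (rng.random((n, n)) < p).astype(float)

def project(stimulus, W, steps=10):
    """Fire a fixed stimulus assembly into area B for several steps."""
    stim = stimulus.astype(bool)
    for _ in range(steps):
        inputs = W.T @ stimulus            # synaptic input to each B neuron
        winners = np.argsort(inputs)[-k:]  # k-WTA: local inhibition
        # Hebbian plasticity: synapses from firing inputs to winners grow.
        W[np.ix_(stim, winners)] *= 1 + beta
    return winners  # the neurons that stabilise into an assembly

stimulus = np.zeros(n)
stimulus[rng.choice(n, k, replace=False)] = 1.0
assembly = project(stimulus, W)
print(len(assembly))  # k neurons now represent the stimulus in area B
```

Note that everything here is local: each neuron needs only its own input sum and the area-wide inhibition signal, which is what makes the model plausible as brain-like hardware and cheap to simulate at scale.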

3 Spiking

TBD

4 References

Beniaguev, Segev, and London. 2021. “Single Cortical Neurons as Deep Artificial Neural Networks.” Neuron.
Blazek, and Lin. 2020. “A Neural Network Model of Perception and Reasoning.” arXiv:2002.11319 [cs, q-bio].
Dabagia, Papadimitriou, and Vempala. 2023. “Computation with Sequences in the Brain.”
Dabagia, Vempala, and Papadimitriou. 2022. “Assemblies of Neurons Learn to Classify Well-Separated Distributions.” In Proceedings of Thirty Fifth Conference on Learning Theory.
Dezfouli, Nock, and Dayan. 2020. “Adversarial Vulnerabilities of Human Decision-Making.” Proceedings of the National Academy of Sciences.
Friston. 2010. “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience.
Griffiths, Chater, Kemp, et al. 2010. “Probabilistic Models of Cognition: Exploring Representations and Inductive Biases.” Trends in Cognitive Sciences.
Hasson, Nastase, and Goldstein. 2020. “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks.” Neuron.
Hinton. n.d. “The Forward-Forward Algorithm: Some Preliminary Investigations.”
Hoel. 2021. “The Overfitted Brain: Dreams Evolved to Assist Generalization.” Patterns.
Lee, Leibo, An, et al. 2022. “Importance of prefrontal meta control in human-like reinforcement learning.” Frontiers in Computational Neuroscience.
Lillicrap, and Santoro. 2019. “Backpropagation Through Time and the Brain.” Current Opinion in Neurobiology.
Ma, Wei Ji, Kording, and Goldreich. 2022. Bayesian Models of Perception and Action.
Ma, Wei Ji, and Peters. 2020. “A Neural Network Walks into a Lab: Towards Using Deep Nets as Models for Human Behavior.” arXiv:2005.02181 [cs, q-bio].
Meyniel, Sigman, and Mainen. 2015. “Confidence as Bayesian Probability: From Neural Origins to Behavior.” Neuron.
Millidge, Tschantz, and Buckley. 2020. “Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs.” arXiv:2006.04182 [cs].
Mitropolsky, Collins, and Papadimitriou. 2021. “A Biologically Plausible Parser.”
Ororbia, and Mali. 2023. “The Predictive Forward-Forward Algorithm.”
Papadimitriou, and Vempala. 2018. “Random Projection in the Brain and Computation with Assemblies of Neurons.” In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs).
Papadimitriou, Vempala, Mitropolsky, et al. 2020. “Brain computation by assemblies of neurons.” Proceedings of the National Academy of Sciences of the United States of America.
Ren, Kornblith, Liao, et al. 2022. “Scaling Forward Gradient With Local Losses.”
Robertazzi, Vissani, Schillaci, et al. 2022. “Brain-Inspired Meta-Reinforcement Learning Cognitive Control in Conflictual Inhibition Decision-Making Task for Artificial Agents.” Neural Networks.
Saxe, Nelli, and Summerfield. 2020. “If Deep Learning Is the Answer, Then What Is the Question?” arXiv:2004.07580 [q-bio].
Vanchurin, Wolf, Katsnelson, et al. 2021. “Towards a Theory of Evolution as Multilevel Learning.”
Wang, Kurth-Nelson, Kumaran, et al. 2018. “Prefrontal cortex as a meta-reinforcement learning system.” Nature Neuroscience.
Yuan, Xiang, Crandall, et al. 2020. “Learning the generative principles of a symbol system from limited examples.” Cognition.