A New Theory Behind How And Why Neurons Work: Cellular automata


Keywords—entropy; cellular automata; neurons; thermodynamics; entropy maximisation

I.     Introduction

While neurons share the same basic characteristics and make-up as other cells of the body, their unique electrochemical properties allow them to transmit signals throughout the body. Neuronal communication is therefore an electrochemical process involving different transmitters responsible for different functions. Until recently, the nature of this signal transmission remained elusive, despite numerous suggestions, some of them groundbreaking, such as Hebbian theory, which postulates that signal transmission results from an increase in synaptic efficacy caused by the persistent stimulation of postsynaptic cells by presynaptic cells. A recent study by Jennifer Flack and Nell Watson, titled A New Theory Behind How and Why Neurons Work, attempts to address this age-old question by suggesting a powerful connection between entropy maximisation and numerous other fields, including neuroscience, cognitive science, and machine learning [1]. On this basis, the authors propose a theory that goes beyond Hebbian theory: it postulates that the functions and processes of neurons emerge from microscopic ground truths, and that these same ground truths are likely responsible for all of the universe’s intelligent behaviour.

II.    Entropy maximisation

In A New Physics Theory of Life, Wolchover reports on a new scientific theory that seeks to explain the existence of life, one that argues that a group of simple atoms exposed to energy and surrounded by a heat bath (such as the atmosphere) will undergo a gradual restructuring in such a way as to dissipate increasingly more energy [2]. The implication of this kind of restructuring is that, when exposed to certain conditions, matter will inexorably acquire life’s key physical attributes. Drawing from this observation, Flack and Watson saw potential in applying thermodynamic (entropic) principles to optimise NEAT (NeuroEvolution of Augmenting Topologies) by aiding its mutation and evolution processes. Designing neural networks by leveraging principles of Entropic Computing, they argued, would allow thermodynamic optimisation to generate machine intelligences that are hyper-optimised for specific use cases. Accordingly, the authors defined entropy maximisation as a process that enables “a ‘unit’ to see multiple possible futures, select the most preferable, and take the necessary steps to bring it into being” [1].
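
To make this definition concrete, the following minimal sketch shows a toy agent that enumerates possible futures, scores each by how many further options it leaves open, and commits to the most preferable one. The grid world and all names here are assumptions made purely for illustration; this is not the authors' implementation.

    # Illustrative sketch only (not the authors' implementation): a toy agent that
    # "sees multiple possible futures, selects the most preferable, and takes the
    # necessary steps to bring it into being".

    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # moves on a small grid

    def reachable_states(state, depth, blocked):
        # Count distinct states reachable within `depth` steps: a crude proxy
        # for the unit's future freedom of action.
        frontier, seen = {state}, {state}
        for _ in range(depth):
            frontier = {(x + dx, y + dy)
                        for (x, y) in frontier
                        for (dx, dy) in ACTIONS
                        if (x + dx, y + dy) not in blocked}
            seen |= frontier
        return len(seen)

    def choose_action(state, blocked, horizon=5):
        # Evaluate each possible "future" and select the one that keeps
        # the most options open.
        candidates = [a for a in ACTIONS
                      if (state[0] + a[0], state[1] + a[1]) not in blocked]
        return max(candidates,
                   key=lambda a: reachable_states((state[0] + a[0], state[1] + a[1]),
                                                  horizon, blocked))

In this toy setting the agent gravitates towards open regions of the grid, simply because those positions maximise the number of states it can still reach.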

The core driver behind this revolutionary idea is entropy, the gradual degradation of the matter and energy present in the universe towards a final state of inert uniformity. This process of energy dissipation governs all interactions in the universe. Consequently, by examining complex adaptive systems such as the brain, or life itself, through the lens of entropy maximisation, one can develop a deeper, more refined understanding of these systems. Applying this entropic perspective to life, the authors note that while life successfully keeps its internal entropy low, it does so by ensuring that the entropy of its surroundings increases. For life-like processes, progression tends to be strongly directional in time, i.e., while a single cell may self-replicate into two new cells, no two existing cells may fuse to form one cell. Thus, processes that change the configuration of matter rely on this time directionality, which in turn depends on a directional flow of energy. Note that this directionality is also what allows life to grow in complexity, leading to the formation of more complex organisations over time, which are capable of executing more complex functions.

According to Flack and Watson, life may actually be a catalyst for entropy production. In physical terms, living things exhibit certain abilities that set them apart from inert matter: life is capable of replicating, of harvesting energy from the environment, and of anticipating the future based on past and present knowledge. Therefore, in a bid to further increase the rate of energy dissipation, entropy will favour the creation of increasingly complex structures that are more efficient at dissipating energy than inert matter. A study by England established that structures formed through reliable entropy production in a time-varying environment appear adapted to consuming energy from that environment [3]. The study shows that, based on the second law of thermodynamics, the growth rate and durability of self-replicating life forms constrain the minimum amount of chemical energy required for growth. As such, a randomly wired chemical network will, through a spontaneous process, discover a stable, finely tuned means of extracting chemical energy from its environment.

 

III.   Entropy maximisation and intelligent behaviour: cellular automata

The 2nd Law of Thermodynamics dictates that a complex system will always evolve towards a state of greater disorder. However, as recent studies indicate, this process reveals a possible deeper connection between entropy maximisation and intelligent behaviour. One such study is that of Wissner-Gross and Freer, which argues that any mechanical system that adheres to the dictates of the second law tends to show features of ‘intelligence’, pointing towards an implicit connection between what is usually regarded as a human attribute and the fundamental laws of physics [4]. The authors propose a ‘causal path entropy’ based not on the internal arrangements that a system can access at any moment, but on the number of arrangements it is likely to pass through on the way to possible future states. In sharp contrast with the usual entropy, no known fundamental law dictates that this future-looking entropic force should govern the course of a system’s evolution. Through experiment, however, the authors found that systems tend to seek configurations that maximise their ability to respond to further changes, something they interpreted as a rudimentary form of adaptive intelligence. They were thus able to calculate a ‘causal entropic force’ that pushes a system to evolve in such a way as to increase this modified entropy.
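
Stated formally, following the definitions in [4], the causal path entropy of a macrostate X over a time horizon \tau, and the resulting causal entropic force, are:

    S_c(\mathbf{X}, \tau) \;=\; -\,k_B \int \Pr\!\big(\mathbf{x}(t)\mid \mathbf{x}(0)\big)\,\ln \Pr\!\big(\mathbf{x}(t)\mid \mathbf{x}(0)\big)\; \mathcal{D}\mathbf{x}(t),

    \mathbf{F}(\mathbf{X}_0, \tau) \;=\; T_c \,\nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau)\,\big|_{\mathbf{X}_0},

where the integral runs over all paths x(t) the system can take during the interval [0, \tau], and T_c is a causal path temperature that sets the strength of the force.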

Consequently, they were able to show that sophisticated behaviours can emerge from this simple physical process. From this, it appears that intelligent behaviour may not merely be connected to entropy maximisation but may emerge directly from it. This formulation should not be taken as a literal model of how intelligence develops, even though the observed behaviours resemble the human cognitive niche; it suggests only “a potentially general thermodynamic model of adaptive behaviour as a non-equilibrium process in open systems” [4]. Simply put, it offers a thermodynamic picture of what intelligence is: a drive to maximise future freedom of action. Intelligence, on this view, is not something that merely tries to acquire control; it is the process of acquiring as much control over the environment as possible.

IV.   Impacts on life

What this new theory tries to communicate is that life itself is an entropy maximisation process, and that all of life’s intelligence and complex behaviours emerge from this process. Intriguing as these findings were, Flack and Watson, being cognitive scientists, sought to find out whether these concepts were applicable to understanding the human brain. They approached this undertaking via the concept of homuncular functionalism, which proposes that cells have their own agency and that human actions are actually the result of collective cell agency. In turn, this shifted the development of the theory from perceiving neurons simply as processing units to understanding them as entities with unique drives. Ascribing agency to each neuron allowed Flack and Watson to view every neuron as an entity that strives for entropy maximisation, and hence seeks to maximise its future freedom of action.

While such an argument seemed to make sense on a deeper level, it still left open the question of how firing maximises future freedom of action. The authors were aware that movement in space is not itself the goal of future freedom of action; what matters is the number of options available to each neuron. As Wissner-Gross and Freer showed with their inverted pendulum experiment, the pole spontaneously assumed the inverted, upright position (see Fig. 1 below) because that position allowed it to maximise its options: the inverted position gives it the most potential energy, which can be released in any direction to move the system to another state. If the pendulum were instead to remain hanging downwards, it would require more effort and time to reach any other position. With this, Flack and Watson understood that firing could be represented as a form of maximising freedom. However, this alone did not explain how the act of firing maximises freedom.

Fig. 1. Causal entropic forcing. Courtesy of Wissner-Gross and Freer, 2013.
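
The behaviour in Fig. 1 can be imitated with a simple Monte Carlo procedure: sample many noisy future trajectories under each candidate action and prefer the action whose futures are most diverse. The sketch below is a rough, assumption-filled toy (simplified pendulum dynamics, a histogram entropy estimate, invented parameter values), not the published method of [4], but it conveys the same mechanism.

    # A toy Monte Carlo sketch of causal entropic forcing, in the spirit of [4].
    # The pendulum model is deliberately simplified and all constants are assumed.

    import math
    import random
    from collections import Counter

    DT, HORIZON, SAMPLES = 0.05, 40, 200
    GRAVITY, NOISE, BINS = 9.81, 2.0, 24

    def step(theta, omega, torque):
        # Damped pendulum dynamics; theta = 0 is the hanging-down position.
        omega += (-GRAVITY * math.sin(theta) - 0.1 * omega + torque) * DT
        theta += omega * DT
        return theta, omega

    def path_entropy(theta, omega, torque):
        # Roll out many noisy futures after applying `torque` now, and estimate
        # the entropy of the distribution of final pole angles.
        finals = Counter()
        for _ in range(SAMPLES):
            t, w = step(theta, omega, torque)
            for _ in range(HORIZON):
                t, w = step(t, w, random.gauss(0.0, NOISE))
            finals[int((t % (2 * math.pi)) / (2 * math.pi) * BINS)] += 1
        total = sum(finals.values())
        return -sum((n / total) * math.log(n / total) for n in finals.values())

    def causal_entropic_action(theta, omega, torques=(-1.0, 0.0, 1.0)):
        # Choose the torque whose futures are most diverse (highest path entropy).
        return max(torques, key=lambda u: path_entropy(theta, omega, u))

Run repeatedly in a control loop, a rule like this tends to pump the pole towards, and hold it near, the upright position, since that is where the widest range of future states remains reachable.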

It was also clear that a correlation between neural activity and entropy maximisation existed. This conviction emerged not only because neurons burn more energy when firing, but also because earlier research on the thermodynamics of learning in neurons showed a clear correlation between the rate at which neurons learn and the amount of heat and entropy they produce in the process. That study showed that the efficiency of learning is bounded by the total entropy production of a neural network: the slower a neuron learns, the less heat and entropy it produces, and the more efficient its learning becomes [5].
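
Stated schematically (this is only a paraphrase of the bound reported in [5], with k_B converting information in nats into entropy units), the result takes the form:

    \Delta S_{\mathrm{tot}} \;\ge\; k_B\,\Delta I
    \qquad\Longleftrightarrow\qquad
    \eta \;=\; \frac{k_B\,\Delta I}{\Delta S_{\mathrm{tot}}} \;\le\; 1,

where \Delta I is the information the network acquires about its inputs and \Delta S_{\mathrm{tot}} is the total entropy produced while acquiring it; slow, quasi-static learning pushes \Delta S_{\mathrm{tot}} down towards this bound, which is exactly the trade-off described above.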

A closer examination of Hebbian theory, which argues that a cell’s persistent reverberatory activity often induces lasting changes in its stability and affects surrounding cells, led to the postulation that neurons, through the act of firing, could actually be trying to control other neurons. Neurons, it appeared, are not only trying to maximise entropy by firing and burning energy themselves, but also by prompting surrounding cells to fire and burn even more energy. As envisaged in Hebbian theory, causality plays an important role in the learning process. A cell will not seek to strengthen a connection simply because another cell happened to fire at the same time; rather, it will strengthen the connection if it perceives itself to be influencing the other cell. What this reveals about firing is that the act is not a method of information processing per se, but rather a tool used by cells to influence one another. The processing of information is a secondary property that emerges from this activity.
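
This causal reading of Hebbian learning can be illustrated with a small sketch. The rule below is a generic spike-timing-style update invented for illustration (it is not a rule proposed in [1]): a synapse is strengthened only when the presynaptic spike precedes, and therefore plausibly causes, the postsynaptic spike, and weakened when the timing runs the other way.

    # Illustrative causality-gated Hebbian update (an assumed toy rule, not taken
    # from [1]): connections strengthen only when the presynaptic neuron plausibly
    # influenced the postsynaptic one, i.e. when its spike came first.

    import math

    LEARNING_RATE = 0.05
    TAU = 20.0  # ms; time scale over which influence is assumed to decay

    def hebbian_update(weight, t_pre, t_post):
        # t_pre, t_post: spike times in ms of the pre- and postsynaptic neurons.
        dt = t_post - t_pre
        if dt > 0:
            # Pre fired before post: plausible influence, strengthen the synapse.
            return weight + LEARNING_RATE * math.exp(-dt / TAU)
        else:
            # Post fired first (or simultaneously): no causal influence, weaken it.
            return weight - LEARNING_RATE * math.exp(dt / TAU)

    # Example: a presynaptic spike 5 ms before the postsynaptic one strengthens
    # the weight; a reversed ordering weakens it.
    w = 0.5
    w = hebbian_update(w, t_pre=10.0, t_post=15.0)   # dt = +5  -> strengthened
    w = hebbian_update(w, t_pre=60.0, t_post=50.0)   # dt = -10 -> weakened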

Note that Hebbian theory emphasised causality in the connection-seeking process. What this means is that a neuron with increased activity, whether the result of numerous neural connections or of enhanced sensory stimuli, may still not be reached out to by another neuron, for the simple reason that it does not cause that neuron to fire as strongly as another does. Viewed through the goal of entropy maximisation, this makes sense: a neuron will seek and favour only those sources over which it has more influence, even if it has to dissipate more energy. Since entropy maximisation is the goal, the neuron will seek out and hold onto the connection that causes it to fire more, because it sees such a connection as enabling it to dissipate more energy than any other available connection. However, while neurons opt in to participate, they are selective in their choice of collaborators, a process that some studies tentatively suggest might be mediated by glial cells.

Now, if other neurons can replicate this firing process along a chain, then a neuron can change its firing pattern and in turn affect multiple others in the chain to varying degrees. Through this theory, Flack and Watson predicted that it should be possible to replicate the functions of neurons with something entirely different: anything usable as a tool to influence other agents could make up such a system. As predicted, it turns out that bacteria can communicate electrically. Electrical bacterial communication, as shown by Humphries et al., uses biofilms to propagate electrical signals, a perfect example of collective agency. In this study, the authors observed that ion channels (small pores in the cell membrane) allow electrically charged molecules to move in and out, which in turn allows potassium ions to ripple through the entire biofilm [6]. Unlike neurons, which send signals along directed channels, the bacteria send them out as a mass impulse.
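
This kind of undirected, rippling signal can be mimicked by a simple cellular automaton. The sketch below is a generic excitable-medium model (a Greenberg–Hastings-style rule chosen here for illustration, not a model from [6]): each cell is resting, excited, or refractory, and excitation spreads to resting neighbours, producing a wave that sweeps across the whole grid much like the potassium ripple through a biofilm.

    # A toy excitable-medium cellular automaton (Greenberg-Hastings style), used
    # here only as an analogy for the potassium wave rippling through a biofilm.
    # States: 0 = resting, 1 = excited, 2 = refractory.

    SIZE = 20

    def step(grid):
        new = [[0] * SIZE for _ in range(SIZE)]
        for i in range(SIZE):
            for j in range(SIZE):
                if grid[i][j] == 1:          # excited cells become refractory
                    new[i][j] = 2
                elif grid[i][j] == 2:        # refractory cells recover
                    new[i][j] = 0
                else:                        # resting cells fire if a neighbour is excited
                    neighbours = [grid[(i + di) % SIZE][(j + dj) % SIZE]
                                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                    new[i][j] = 1 if 1 in neighbours else 0
        return new

    # Seed a single excited cell and watch the excitation ripple outwards as a
    # mass impulse rather than along any directed channel.
    grid = [[0] * SIZE for _ in range(SIZE)]
    grid[SIZE // 2][SIZE // 2] = 1
    for _ in range(10):
        grid = step(grid)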

V.    Implications

If bacteria can send electrical signals via a biofilm, then the same could occur via any other slime, which would suggest that neurons as such are not what matters. This new understanding of signal transmission can be extended to other fields, such as the creation of better AI using the same concepts. The focus, however, shifts to a much bigger goal: speeding up calculation times by combining cellular automata, entropy maximisation, and possibly quantum walks. Furthermore, with cellular automata it could be possible to create a complex adaptive system that exhibits emergent intelligent behaviour. The new theory also suggests that what matters most may not be the ability to replicate exactly how neurons work, or even to send signals in the manner that neurons do, but rather the ability to have a system such as a cellular automaton leverage entropic principles to evolve its own methods, as sketched below. Essentially, it may be possible to build such a system out of cellular automata and thereby approach AGI, or to leverage cellular automata to design a system better than the brain.
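
One hedged way to picture "leveraging entropic principles to evolve its own methods" is a toy evolutionary loop over cellular-automaton rules, scored by the entropy of the patterns they produce. Everything below (the elementary-CA setting, the mutation scheme, the entropy score) is an assumption made for illustration; it is not a procedure described in [1].

    # Toy sketch: hill-climb over elementary cellular-automaton rules, preferring
    # rules whose output has higher Shannon entropy. Purely illustrative.

    import math
    import random

    WIDTH, STEPS = 101, 100

    def run_ca(rule, width=WIDTH, steps=STEPS):
        # Evolve an elementary CA (rule 0-255) from a single active cell.
        row = [0] * width
        row[width // 2] = 1
        for _ in range(steps):
            row = [(rule >> ((row[(i - 1) % width] << 2) |
                             (row[i] << 1) |
                             row[(i + 1) % width])) & 1
                   for i in range(width)]
        return row

    def block_entropy(row, k=3):
        # Shannon entropy of length-k blocks in the final row: a crude proxy
        # for how rich the structures produced by the rule are.
        counts = {}
        for i in range(len(row) - k + 1):
            block = tuple(row[i:i + k])
            counts[block] = counts.get(block, 0) + 1
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    def evolve(generations=200):
        rule = random.randrange(256)
        best = block_entropy(run_ca(rule))
        for _ in range(generations):
            candidate = rule ^ (1 << random.randrange(8))   # flip one bit of the rule table
            score = block_entropy(run_ca(candidate))
            if score >= best:
                rule, best = candidate, score
        return rule, best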

Also worth exploring is the idea of calculating the entropy production of these systems. Here, one can use the computer itself to measure the entropy production associated with them. Computer processes ought to have a real, measurable physical entropic effect on the computer. The fact that computers consume electrical energy and produce heat implies that computer processes are not separate from the computer. That computer processes consume energy is, according to Flack and Watson, reason enough for them to produce a measurable amount of entropy. In fact, recent studies of reservoir computers reveal a deep relationship to entropic processes, and even to the brain. For instance, Bubnoff reports on a recently built mesh-like computer capable of organising itself out of random chemical and electrical processes [7]. While such a device is not itself a brain, it does perform simple learning and logic operations.
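
As a concrete, back-of-the-envelope anchor for "a measurable amount of entropy", Landauer's principle gives the minimum entropy and heat associated with erasing information. The short sketch below simply evaluates that textbook lower bound; real processors dissipate vastly more than this floor, and nothing here measures any particular machine.

    # Landauer lower bound: erasing one bit produces at least k_B * ln(2) of entropy
    # and dissipates at least k_B * T * ln(2) of heat. This is a textbook floor,
    # not a measurement of an actual computer's entropy production.

    import math

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def landauer_bounds(bits_erased, temperature_kelvin=300.0):
        entropy = bits_erased * K_B * math.log(2)   # J/K
        heat = temperature_kelvin * entropy         # J
        return entropy, heat

    # Example: erasing one gigabyte (8e9 bits) of memory at room temperature.
    s, q = landauer_bounds(8e9)
    print(f"minimum entropy produced: {s:.3e} J/K, minimum heat dissipated: {q:.3e} J")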

VI.   Conclusion

The theory proposed by Flack and Watson postulates that neurons are entities with individual agency. As such, neurons constantly seek to maximise their future freedom of action, i.e. they strive to maximise entropy. The firing of a neuron is thus a process aimed at maximising entropy, achieved not only by the neuron itself firing and burning energy, but also by prompting surrounding neurons to fire and burn even more energy. By influencing another neuron to fire and burn energy, one neuron is in effect attempting to exert control over the other. Firing, therefore, is a tool used by a neuron to influence other cells, with the processing of information simply being an emergent property of this activity. Given this, it might be possible to replicate the functions of a neuron using, say, slime, especially in the wake of the discovery that bacteria communicate electrically via biofilms.

References
[1] J. Flack and N. Watson, "A new theory behind how and why neurons work," October 2017. Retrieved from http://ideastwctw.blogspot.co.uk/2017/10/a-new-theory-behind-how-and-why-neurons.html
[2] N. Wolchover, "A new physics theory of life," January 2014. Retrieved from https://www.scientificamerican.com/article/a-new-physics-theory-of-life/
[3] J. L. England, "Dissipative adaptation in driven self-assembly," Nature Nanotechnology, vol. 10, pp. 919-923, 2015.
[4] A. D. Wissner-Gross and C. E. Freer, "Causal entropic forces," Physical Review Letters, vol. 110, 168702, 2013. DOI: 10.1103/PhysRevLett.110.168702.
[5] L. Zyga, "The thermodynamics of learning," February 2017. Retrieved from https://m.phys.org/news/2017-02-thermodynamics.html
[6] J. Humphries, L. Xiong, J. Liu, A. Prindle, F. Yuan, H. A. Arjes, L. Tsimring, and G. M. Suel, "Species-independent attraction to biofilms through electrical signaling," Cell, vol. 168, no. 1-2, pp. 200-209, 2017. https://doi.org/10.1016/j.cell.2016.12.014
[7] A. von Bubnoff, "A brain built from atomic switches can learn," September 2017. Retrieved from https://www.quantamagazine.org/a-brain-built-from-atomic-switches-can-learn-20170920/
