John Conway's Game of Life

Getting Started

Steve Jones, March 27, 2023

Abstract

An idea in the early 1980s led Steve Jones to think about how information is represented in the brain, and to deduce a surprising amount about it. Over 40 years later, the hardware is finally reaching the point where those thought experiments can be tested at scale.

Introduction

As a college freshman I was thinking about how natural brains might represent the sights, sounds, tastes, touches, smells, and the history of all that, using just neurons, the best estimate at the time being around 10^11 of them in a human brain. We didn't have much to go on as far as how the neurons were wired up, though the operation of single neurons as leaky-integrate-and-fire mechanisms was established. It worried me that there simply weren't enough of them to represent our experiences, much less play them back as memories. The working model of the brain was very much compartmented into active processing centers and memory centers, though nobody could explain how those worked.

One of the things that bugged me was that our Von Neumann computer architectures with limited storage were completely different from natural brains, though the two were often compared; the brain was nothing more than a computer, it was said. The moment you start thinking about implementing the kinds of things a brain can actually do using a Von Neumann architecture, you are faced with decisions that make you feel like you're heading in the wrong direction. For example, how is visual sensory information transmitted to the brain? Is it simply X/Y-encoded luminosity and color data? Where is the constant stream of visual data stored, so that we can recall it? Is video really stored as a pixel field on an X/Y grid somewhere? The best guesses of the day were that the processing centers worked feverishly on these raw images to extract and label features-- in line with the thinking behind expert system architectures at the time. Even a limited 320x200x1 video frame from a camera takes about 64KB to represent in memory. That's a lot, considering that we effectively watch a movie like this our entire lives. Is there a giant queue of frames that get tagged with labels when significant events happen? Maybe smells and tastes are interspersed in there as well, with smaller datum widths. Was memory just a filing system that could keep all this information and replay those pixels and other channels? And what happens to the neurons reserved for later in life-- do they just remain on vacation until their turn to store something comes up? No; this is all wrong; it can't work that way.

I later came across John Conway's Game of Life, a two-dimensional matrix of cells, each of which has its next state in time dictated by a global arithmetical rule applied to the states of the cells surrounding it. Patterns oscillate in two, three, or even dozens of steps in time, some moving about the matrix or emitting streams of new patterns from "guns". It seemed "alive"-- maybe like those real-time QEEG images showing electrical activity in the brain moving from place to place.

If you're not familiar with this, it's worth looking up. There is a two-dimensional matrix of cells, where each cell has eight neighbor cells. The ones on the left edge of the matrix can optionally be considered to be neighbors with the ones on the right edge, and similarly for the top and bottom. The system is clocked, so that the states (on, off) of all the cells in the matrix during one clock period (called an epoch) are used to compute the values of all the cells in the next epoch. The rules can vary across life-like cellular automata, but in Conway's version a live cell survives into epoch (i+1) only if it has two or three live neighbors in epoch i, a dead cell turns on if it has exactly three live neighbors, and any other cell-- too isolated or too crowded-- is off in the next epoch. Before starting the system, a few cells may be randomly seeded with on values, so that the matrix starts with a non-empty pattern. When the system clock starts, complex dynamic patterns emerge from the initial conditions. Sometimes the patterns fizzle out, but sometimes they settle down and persist as a group oscillation.
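For readers who want to try it, the following is a minimal sketch of that standard rule set on a wraparound grid. Python and NumPy are assumed, and the names here (life_step, the blinker seed) are purely illustrative rather than taken from any particular implementation.

    import numpy as np

    def life_step(grid: np.ndarray) -> np.ndarray:
        """One epoch of Conway's Game of Life (survive on 2-3 neighbors, born on 3)
        on a toroidal grid of 0s and 1s; edges wrap so border cells are neighbors."""
        # Sum the eight neighbors of every cell by rolling the grid in each direction.
        neighbors = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1)
            for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # Live cells with 2 or 3 live neighbors survive; dead cells with exactly 3 are born.
        return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

    # Seed a small matrix with a "blinker" (a period-2 oscillator) and clock it.
    grid = np.zeros((8, 8), dtype=np.uint8)
    grid[3, 2:5] = 1
    for epoch in range(4):
        print(grid, "\n")
        grid = life_step(grid)

Seeding with random noise instead of the blinker is the quickest way to watch patterns fizzle out or settle into the group oscillations described above.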

The collaboration of many cells in an oscillating pattern is much richer than seeing an oscillating pattern in a single element. Consider a walking ring counter-- essentially a linear feedback shift register-- often used to generate pseudo-random numbers in software. A single integer is used to store the current state, and a mathematical formula is used to compute the next value from the current one. These counters don't just count upward in binary; they follow a more interesting sequence whose values are much more uniformly distributed than a simple count. Getting back to the matrix, we could consider the subset of cells that light up at all in a dynamic pattern to be bits in a large integer implementing such a counter, but it's not quite that simple. As long as neighboring patterns don't touch, they don't affect each other. But if they do touch, they merge, and a combined pattern emerges. It seems that the matrix has room for multiple dynamic patterns going at once. Further, the patterns' oscillation periods vary, from a single epoch (a stable, static pattern) to dozens of epochs or more.
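As a concrete illustration of the single-integer version, here is a minimal sketch of one such feedback counter, expressed as a Galois linear feedback shift register (the closest standard construct to what I have in mind; the taps, seed, and function name below are illustrative, not canonical):

    def lfsr_sequence(seed: int, taps: int = 0xB400, width: int = 16):
        """Step a 16-bit Galois LFSR: each next state is computed from the current
        one by a fixed shift-and-XOR formula, so the single integer walks through
        a long, scrambled sequence of values rather than counting upward."""
        state = seed & ((1 << width) - 1)
        while True:
            yield state
            lsb = state & 1
            state >>= 1
            if lsb:
                state ^= taps

    # The first few states of the walk; the values do not simply count up in binary.
    gen = lfsr_sequence(seed=0xACE1)
    print([hex(next(gen)) for _ in range(8)])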

I started thinking-- there could be more representative states possible in a dynamic system than in a static one. Consider that even with a 1000 x 1000 matrix (a million cells), each implementing a bit, the number of possible static states that can be represented is 2^1,000,000, a large number to be sure. But the number can be made much larger if we think of a dynamic state as a sequence of static states of this system. The number of two-step sequences is (2^1,000,000)^2. The number of n-step sequences is (2^1,000,000)^n. In other words, if you're willing to have longer sequences used as dynamic representational states, the number of states gets very large indeed, and might just be large enough to represent the amount of data coming at a brain from the physical world, perhaps streamed in at designated input cells in the matrix.
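Written out (a sketch of the counting, under the simplifying assumption that every length-n sequence of grid states counts as a distinct representational state):

    \[
      \text{static states: } 2^{1{,}000{,}000}
      \qquad\qquad
      \text{length-}n\text{ sequences: } \left(2^{1{,}000{,}000}\right)^{n} = 2^{1{,}000{,}000\,n}
    \]

so the representational capacity grows exponentially in the length of the sequence, not just in the number of cells.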

Dynamic States as Memory

This system still has no memory per se, because it does not tuck away the experience in any permanent way, such as by adjusting the communication between cells in response to having entered those states. In the ML world, we would think of training a neural network by adjusting the weights between connected nodes in order to minimize an error function, perhaps using backpropagation. In neurobiology, we would think of this as synaptic plasticity, effected as presynaptic neurons signal postsynaptic ones.

Adding static plasticity to this system could cause certain previously-encountered static states to be more easily entered again. And adding dynamic plasticity could make it more likely that the system falls into a recognized pattern again. In other words, plasticity could be used to make the system model what it had seen and play it back, even in related contexts, guided by the mathematical attractors produced by the link adjustments.
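As an illustration of what "adding plasticity" could look like, here is a toy sketch (not the mechanism used in any particular system): cells get weighted links to their neighbors, and links between co-active cells are strengthened in a Hebbian fashion, so previously visited patterns become easier to fall back into.

    import numpy as np

    def plastic_step(state, weights, threshold=2.0, learn_rate=0.01):
        """One epoch of a toy weighted life-like automaton with Hebbian plasticity.

        state:   (H, W) array of 0/1 cell activations.
        weights: (H, W, 8) array of per-neighbor link strengths.
        A cell turns on when the weighted sum of its eight neighbors exceeds the
        threshold; links between co-active cells are then strengthened, biasing
        the system toward patterns it has entered before."""
        offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
        # Weighted input from each neighbor, with toroidal wraparound.
        drive = np.zeros(state.shape, dtype=float)
        for k, (dy, dx) in enumerate(offsets):
            drive += weights[:, :, k] * np.roll(np.roll(state, dy, axis=0), dx, axis=1)
        next_state = (drive > threshold).astype(np.uint8)
        # Hebbian-style update: strengthen links whose source and target were both active.
        for k, (dy, dx) in enumerate(offsets):
            pre = np.roll(np.roll(state, dy, axis=0), dx, axis=1)
            weights[:, :, k] += learn_rate * pre * next_state
        return next_state, weights

Run repeatedly on the same seed, the strengthened links act as the attractors mentioned above: the system becomes more likely to re-enter the oscillations it has already lived through.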

Dynamic States as Sensory Representation

Even more fascinating than a memory solution was the notion that these dynamic states resulted from a particular seeding of states across the matrix. Often, the matrix would settle on a dynamic oscillation that involved many moving parts, but basically quivered through a sequence of two, three, or even dozens of states, representing what it had been stimulated with. It's the oscillation that is the representation, not any single static state of the matrix-- although if you seed the matrix with any initial condition from within that sequence, it will continue the sequence. This dynamic state occupies space and time, and persists. It is the representation of the input.

Take the oscillating picture at the top of this article. It is an oscillator with a 48-step period overall, with two smaller side oscillators running at two and four steps, respectively. Note that the period of the 48-step oscillation does not depend on the smaller oscillators even being present; they are all on the computation canvas at the same time, yet somehow remain separate ideas. Perhaps they could be thought of as three independent inputs being represented in the matrix simultaneously, perhaps describing three parts of a visual scene, or an audio sound bite together with a visual element and a tactile sensation.

Like Machine Learning, But With a Twist

In ML, we already have the idea of spatial encoders and spatial poolers, which can take one or more input data streams and distribute them over a large set of nodes, somewhat the way a hash algorithm like SHA-3 runs over its input to produce a digest that changes whenever the input changes. However, what these oscillating patterns in Conway's game suggest is that we have been thinking only in spatial dimensions, and we might benefit from adding a temporal dimension to how patterns are represented.
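A toy version of that hashing intuition follows; it is not HTM's actual spatial pooler, just an illustration (Python's standard hashlib is assumed, and num_nodes and active_bits are arbitrary illustrative parameters):

    import hashlib

    def sparse_encode(value: str, num_nodes: int = 2048, active_bits: int = 40) -> set:
        """Toy distributed encoder: hash the input repeatedly to pick a small,
        input-dependent subset of node indices. Unlike a real spatial pooler,
        similar inputs do not map to similar codes here; the point is only that
        the representation is spread over many nodes and changes with the input."""
        active = set()
        counter = 0
        while len(active) < active_bits:
            digest = hashlib.sha3_256(f"{value}:{counter}".encode()).digest()
            active.add(int.from_bytes(digest[:4], "big") % num_nodes)
            counter += 1
        return active

    print(sorted(sparse_encode("a red ball"))[:10])

The temporal dimension is exactly what this picture lacks: the code is a snapshot, with no notion of when anything happened.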

This was the inspiration for looking at spiking neural networks instead of ML's static ones. While static ML achieves strong, measurable results (just have a conversation with ChatGPT, for example, to see how effective it can be), it doesn't address real-time learning. Unlike static ML, sentience-- the sensing of and response to the environment-- takes time. It's the time that is the good stuff; it's what makes the behavior real and relevant to the real world. Imagine robots that actually learn how to use their limbs through experience, rather than being awkwardly limited to a set of gaits that only work in certain situations.
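For readers who have not met spiking models, here is a minimal sketch of the leaky-integrate-and-fire update, the single-neuron mechanism mentioned in the introduction; the time constants and currents below are arbitrary illustrative values, not parameters from any production system.

    def lif_step(v, input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """One time step of a leaky-integrate-and-fire neuron.

        The membrane potential v leaks toward v_rest, integrates the input
        current, and emits a spike when it crosses threshold, after which it
        resets. Time is explicit: behavior depends on when inputs arrive,
        not just on what they are."""
        v += dt * (-(v - v_rest) / tau + input_current)
        if v >= v_thresh:
            return v_reset, True
        return v, False

    # Drive a neuron with a constant current and watch it spike periodically.
    v, spike_times = 0.0, []
    for t in range(200):
        v, fired = lif_step(v, input_current=0.08)
        if fired:
            spike_times.append(t)
    print(spike_times)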

Scaling the Idea

As static ML began to take off and mature in the 2000s, I wondered if the hardware would be available to create a spiking neural network that could scale to the size of a natural brain. COTS hard drives were getting into the 1TB and 2TB sizes, and 64GB of RAM was obtainable for most boards, though it was on the edge of being expensive. If a node were represented as a 32KB data structure in memory (with 8192 links to other nodes), then 64GB/32KB = 2MN (2 meganode) networks could be represented in memory, and 2TB/32KB = 64MN (64 meganode) networks could be represented on disk. If everything didn't need to be in RAM at once, or if the network were distributed across multiple systems, it seemed possible to scale up to the size required to simulate some of the smaller brains in nature. Well into the 2020s, 1TB of RAM is possible and 18TB hard drives are inexpensive, making it practical to experiment with networks at the scale of smaller living brains.
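The sizing arithmetic, parameterized so the numbers can be rechecked against any hardware generation (binary units assumed; the 32KB-per-node figure is the same working assumption as above):

    def node_capacity(bytes_available: int, bytes_per_node: int = 32 * 1024) -> int:
        """How many 32 KiB node structures fit in a given amount of storage."""
        return bytes_available // bytes_per_node

    GiB = 1024 ** 3
    TiB = 1024 ** 4

    print(node_capacity(64 * GiB))  # 2,097,152  -> ~2 meganodes in 64 GB of RAM
    print(node_capacity(2 * TiB))   # 67,108,864 -> ~64 meganodes on a 2 TB drive
    print(node_capacity(1 * TiB))   # 33,554,432 -> ~32 meganodes in 1 TB of RAM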

As I was thinking about scaling the network big enough to approach nature's brain sizes, it became apparent that even small creatures demonstrate remarkable behavior that allows them to live productive lives. These brains, including that of my favorite animal, Drosophila melanogaster (the common fruit fly), are well within the reach of simulation today, at roughly the 200KN scale.

Convinced that scaling was achievable, I began turning little experiments into a production system capable of describing networks of neural networks, generating them into instances on multi-server simulators, and simulating them in real time, with fidelity close to the level at which neurons are commonly understood. With the release of our Sentience Engine V5.0, we optimized performance, and large-scale networks have been described using our SOMA brain description language, generated on multiblade server farms, and simulated in real time.

NeuroSynthetica's Synthetic Sentience (SS) can be thought of as a dynamic form of ML; it employs a network of neural networks, some of which may have varying types and strengths of plasticity algorithms. It is trained by streaming spatially-encoded inputs into neural subnets, which communicate with other neural subnets having other inputs fed into them. Outputs are controlled by other neural subnets, which also feed the signals they send to a robot body's actuators back into other neural subnets. This feedback creates oscillating patterns on a large scale across the whole network. At the neural subnet level, plasticity dynamically adjusts link strengths and timing to create reactance to inputs; these subnets resonate when stimulated, and fall into patterns which represent what is occurring in the environment. The resonance is exhibited as dynamic patterns of data, echoing in dozens of networks simultaneously. The system doesn't work in an unclocked environment-- it is inherently driven by time, and uses patterns of neural activity to model the world.
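The SOMA language and the Sentience Engine themselves are not shown here, but the feedback topology described above can be sketched as a toy graph (all subnet names are hypothetical placeholders): sensory subnets feed association subnets, a motor subnet drives the actuators, and a copy of the motor output is routed back in as another input, closing the loop that lets activity reverberate.

    from collections import defaultdict

    # Toy wiring of subnets (not SOMA and not the Sentience Engine): each entry
    # maps a subnet to the subnets that receive its output. The motor subnet's
    # output goes both to the actuators and back into an association subnet.
    wiring = defaultdict(list, {
        "vision_in": ["assoc_a"],
        "audio_in":  ["assoc_a", "assoc_b"],
        "assoc_a":   ["assoc_b", "motor"],
        "assoc_b":   ["motor"],
        "motor":     ["actuators", "assoc_a"],  # efference copy fed back in
    })

    def downstream(subnet: str, hops: int) -> set:
        """Which subnets a signal can reach within a given number of hops."""
        frontier, reached = {subnet}, set()
        for _ in range(hops):
            frontier = {dst for src in frontier for dst in wiring[src]} - reached
            reached |= frontier
        return reached

    print(downstream("vision_in", hops=4))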

What's Next?

Conway's Game of Life was the inspiration for an idea that helped me understand that dynamic representation will be an important part of real-time AGI (not the AGI of infinitely-wise chatbots and picture creators, but the AGI of robots that can navigate the ever-changing real world with a physical body). What's next is to create prototype robots that have the potential to exhibit learned behavior, such as coordinated limb movement during locomotion, manipulation of the environment, or even mimicked utterances of language. Although all the Maker Faires and robotics clubs make this seem easy, it is in fact the hard part. I spent the better part of 2022 working on a prototype robotic dog inspired by Boston Dynamics' Spot robot (a model of mechanical engineering excellence), only on a much more primitive scale, so that coordinated limb motion could be investigated. It is an extremely exciting time to be working in the fields of Computer Science, Robotics, and AI, and such a privilege to be able to work on something that touches all these areas.

Steve Jones

Founder & CTO

Explorations into synthetic sentience and building the robotics used to demonstrate it.
