Synthetic Brains are brains that are designed by an engineer, rather than grown or expressed by nature. In theory, they can accomplish the same things, given the right specification, design, and implementation. With the right I/O interfaces, both natural and synthetic brains can sense the environment and interact with it. Both can produce behavior that is meaningful, relevant, and responsive to the environment in a way that is not hardwired. Both are capable of being sentient.
The Neurogenesis Process
Natural brains start as a brain plan encoded by gene expression. During neurogenesis in humans, the neural plate forms from the ectoderm (the outer germ layer that also gives rise to skin) and later folds into the neural tube, from which emerge the forebrain, midbrain, and hindbrain, built up from neurons that migrate along guides established by concentration gradients of chemical trophic factors.
Synthetic brains are not embodied in the physical world of chemistry and cells; they have a virtual embodiment instead. They start out as a brain plan written in a hardware-definition-style modeling language, such as NeuroSynthetica's SOMA™. The text files containing this description are compiled with the brain compiler, producing code that is ultimately used by an artificial neurogenesis process to populate the simulation on the servers. When neurogenesis is complete, the servers are instructed to begin the simulation.
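The compile-then-populate pipeline can be illustrated with a minimal sketch. The dictionary-based "plan," the population names, and the fully-connected wiring rule below are all illustrative assumptions, not SOMA syntax or the actual compiler's output:

```python
# Hypothetical sketch of artificial neurogenesis: a compiled brain plan
# (here just a Python dict; real tools use a dedicated modeling language)
# is expanded into neuron populations and connections on the simulator.

brain_plan = {
    "populations": {"retina": 16, "v1": 32, "motor": 8},
    "projections": [("retina", "v1"), ("v1", "motor")],
}

def neurogenesis(plan):
    """Populate a simulation from a compiled plan description."""
    neurons = {}                       # population name -> list of neuron ids
    next_id = 0
    for name, count in plan["populations"].items():
        neurons[name] = list(range(next_id, next_id + count))
        next_id += count
    # Fully connect each projection (real models use guided wiring rules,
    # analogous to the trophic-factor guides of natural neurogenesis).
    synapses = [(pre, post)
                for src, dst in plan["projections"]
                for pre in neurons[src]
                for post in neurons[dst]]
    return neurons, synapses

neurons, synapses = neurogenesis(brain_plan)
print(len(neurons["v1"]), len(synapses))   # → 32 768
```

Once a structure like this exists on the servers, the simulation can be started against it.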
The Operational Mode
Natural brains are always running, as their constituent cells must continue living and performing their functions until they die. Biological processes have evolved to give the constantly running brain a break: we dream when we sleep.
Synthetic brains run when the servers selected for the brain simulation execute simulation software against the generated model. These servers communicate with each other, and with the environment via I/O channels that ultimately reach another, smaller computer over a network, which performs the actual I/O.
In both cases, the brain's many neurons communicate with each other through signalling. When an upstream neuron (called an afferent) is excited, it exhibits an Action Potential (we could say it "fires"), which causes it to release signals to the downstream neurons (called efferents) it is connected to. The efferent neurons carry many kinds of receptors, each sensitive to specific signals; when a receptor encounters its signal, it performs an action at the efferent neuron, such as exciting the neuron or inhibiting its excitement. Other actions, such as modulating plasticity or regulation, or even controlling arborization or apoptosis, are also possible. If the efferents become excited enough to produce an action potential themselves, the same process repeats downstream.
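This excite-until-threshold behavior can be sketched in a few lines. The class, the threshold value, the reset rule, and the "glu" signal name are illustrative assumptions rather than any particular simulator's API:

```python
# Minimal sketch of afferent-to-efferent signalling: a firing neuron
# releases a signal to its efferents; each efferent's receptors map the
# signal to excitation (+) or inhibition (-), and an efferent whose
# accumulated level crosses threshold fires in turn.

THRESHOLD = 1.0

class Neuron:
    def __init__(self, receptors):
        self.receptors = receptors      # signal -> weight (+ excites, - inhibits)
        self.level = 0.0                # accumulated excitation
        self.efferents = []             # downstream neurons

    def receive(self, signal):
        self.level += self.receptors.get(signal, 0.0)
        if self.level >= THRESHOLD:
            self.level = 0.0            # reset after an action potential
            return True
        return False

    def fire(self, signal="glu"):
        """Deliver this neuron's action potential; return efferents that fire."""
        return [eff for eff in self.efferents if eff.receive(signal)]

a = Neuron({})
b = Neuron({"glu": 0.6})
a.efferents.append(b)
a.fire()            # first pulse: b reaches 0.6, below threshold
fired = a.fire()    # second pulse: b reaches 1.2 and fires
print(b in fired)   # → True
```

An inhibitory receptor would simply carry a negative weight, pushing the efferent further from threshold.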
In natural brains, this process operates truly in parallel and asynchronously. Pulses have an analog shape but are effectively digital: a neuron can emit pulses as rapidly as about once per millisecond, but no faster. At any given moment, most neurons are quiescent; the natural brain conserves energy by spiking as little as possible. There are exceptions: some natural neurons pulse with fixed timing, some with a variable rate, and others are analog and do not pulse at all.
In synthetic brains, the simulation process is divided into Epochs, the fundamental timebase of the simulation. In each epoch, the simulator processes all of the neurons that were excited in the previous epoch, delivering an action potential that may cause some or all of their efferent neurons to be scheduled for an action potential in the next epoch. This proceeds in lock-step fashion across all servers in the simulation, so that real-time timing is achieved and the simulation runs neither faster nor slower than physical time.
The epoch period is selectable by the synthetic brain designer. It might be 1ms to meet the fast timing requirements of the real physical world, where the running synthetic brain must interact with an environment governed by physical laws. For synthetic brains operating in slower contexts, perhaps with slower-moving physics or in virtual worlds on the internet, the epoch period can be increased to 10ms, 100ms, 1000ms, or anywhere in between. Increasing the epoch period gives the simulator time to perform more work in each epoch, so there is a trade-off between the volume of work performed per epoch and the rate at which epochs are processed.
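The lock-step scheduling described above can be sketched as a simple loop. The connection table and the one-millisecond period are illustrative assumptions; a real simulator also synchronizes the step across servers:

```python
import time

# Sketch of epoch-driven, lock-step simulation: each epoch, process every
# neuron that fired in the previous epoch and schedule its efferents for
# the next one, then sleep out the remainder of the epoch period so the
# simulation tracks physical time.

EPOCH_PERIOD = 0.001   # 1 ms, chosen by the brain designer

def run(connections, initially_fired, n_epochs):
    fired = set(initially_fired)
    for _ in range(n_epochs):
        start = time.monotonic()
        # Deliver this epoch's action potentials; collect next epoch's.
        fired = {eff for neuron in fired
                 for eff in connections.get(neuron, ())}
        # Lock-step: wait so an epoch never finishes early.
        remaining = EPOCH_PERIOD - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
    return fired

# A simple three-neuron chain: 0 -> 1 -> 2 takes two epochs to traverse.
print(run({0: [1], 1: [2]}, {0}, 2))   # → {2}
```

Lengthening EPOCH_PERIOD in this sketch directly buys more compute time per step, which is the trade-off described above.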
Sensory Inputs From the Environment
While they are awake, natural brains receive input from a wide range of senses, including many beyond sight, taste, smell, touch, and hearing: proprioception (sensing the position of body parts), nociception (sensing pain), and equilibrioception (sensing the orientation of the body in three dimensions), among others. In humans and many other animals, these sensory inputs are delivered to the brain via sensory nerves such as the optic nerve or afferents in the spinal column, and, with the exception of olfaction, are wired to one region called the Thalamus, which recodes this information and distributes it to the various areas of the brain, such as the neocortical areas, that need those signals to perform their functions. The Thalamus contains subcomponents that each process one sense modality in their own region. One such component is the Lateral Geniculate Nucleus (LGN), which processes information from the optic nerve and passes it to the first-stage visual processing area in the cortex, called V1, which in turn sends its information to V2, and so on, all the way to V5.
Natural brains are able to handle a wide range of sensory equipment beyond the sensory systems humans use. For example, fish have a lateral line, which senses the pressure changes and water movements produced by other nearby fish (and, in some species, weak electric fields); in this way, they can sense the presence of others even in light-deprived waters.
Synthetic brain models may be constructed to support any type of input that can be digitized: still camera frames or live video, auditory spectral gains produced from a microphone's waveform, physical orientation, pressure and tactile sensors, GPS position, ambient temperature and light, and, in virtual worlds, things like the prices of securities and links from one website to the next. These sensory data are assigned input channels; originating from a computer system in a robot doing the data collection, they are sent over a network (Bluetooth, Wi-Fi, or other means) to the servers running the simulation. When sensory data are received by the servers, the input neurons assigned to those data channels are stimulated as though they had been stimulated by other neurons.
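The channel-to-neuron mapping can be sketched as follows. The channel number, neuron names, packet shape, and the crude thresholding encoding are all illustrative assumptions, not the actual protocol:

```python
# Sketch of the input-channel mechanism: digitized sensory data arriving
# over the network is routed by channel number to its assigned input
# neurons, which are then stimulated as if by upstream neurons.

input_channels = {7: ["touch_0", "touch_1"]}   # channel -> input neurons

def on_sensor_packet(channel, samples, stimulate):
    """Called when the robot's I/O computer delivers a sensor packet."""
    for neuron in input_channels.get(channel, ()):
        for value in samples:
            if value > 0.5:            # crude encoding: strong readings stimulate
                stimulate(neuron, value)

stimulated = []
on_sensor_packet(7, [0.2, 0.9], lambda n, v: stimulated.append((n, v)))
print(stimulated)   # → [('touch_0', 0.9), ('touch_1', 0.9)]
```

In practice the encoding from raw sample values to stimulation would be chosen per sense modality by the model designer.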
Synthetic brains need not be limited to interacting with the physical world. Virtual sensory data, such as weather or seismic data, financial market data, Twitter, news feeds, or YouTube, are examples of virtual environmental inputs.
Motor Outputs to the Environment
Many natural brains have a cortex whose regions send efferent signals, destined to control the animal's physical body, through intermediate gateway components including the Basal Ganglia and Thalamus; these circuits shape the motor commands that ultimately descend to the muscles that perform movement and make sounds. Additionally, some cortical efferents stimulate regulatory systems that interface with the endocrine system, allowing cortical processing to affect heart rate, breathing, energy production, sleep, and overall readiness for action.
Synthetic brain models may also control output devices, in a manner similar to the input channel mechanism. Output channels are assigned to specific simulated neurons; when one of these neurons activates, its channel is stimulated. The servers hosting those neurons deliver the resulting signals over a network to the computer system attached to the other end of the channel. This computer receives the output notifications and performs activities such as activating servos or emitting sounds from speakers.
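The output side is the mirror image of the input sketch. The neuron names, channel numbers, and JSON message format below are illustrative assumptions about what such a notification might look like:

```python
import json

# Sketch of the output-channel mechanism: specific simulated neurons are
# bound to output channels; when one fires, the server sends a
# notification over the network to the I/O computer, which drives the
# actual device (a servo, a speaker, etc.).

output_channels = {"motor_left": 3, "motor_right": 4}   # neuron -> channel

def on_neuron_fired(neuron, send):
    """Forward an output neuron's action potential to the I/O computer."""
    channel = output_channels.get(neuron)
    if channel is not None:
        send(json.dumps({"channel": channel, "event": "pulse"}))

messages = []
on_neuron_fired("motor_left", messages.append)
print(messages)   # → ['{"channel": 3, "event": "pulse"}']
```

Neurons not bound to any channel simply produce no external effect, just as most neurons in a natural brain drive no muscle directly.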
In nature, most animal behavior is performed by flexing muscles (including locomotion and vocalization), with rare exceptions (e.g., the shocks emitted by an electric eel). In virtual worlds, behavior can include taking virtual actions with respect to perceived inputs. For example, a sentient webscape-based robot may have no actual physical embodiment, but instead may "move" from one website to the next, encountering other sentients along the way. Other, more direct virtual behavior might be the purchase and sale of goods, services, or securities, or alerting humans to dangerous conditions by electronic means.
The Synthetic Brain Development Process
The first task of the synthetic brain developer is to define the Requirements for a new synthetic brain; e.g., support a robot that will navigate the physical environment, using vision to see its surroundings as it moves. From the requirements, a Specification may be produced that describes what will be done to meet them; e.g., create a real-time synthetic brain model that accepts a live video camera feed and pressure sensors on the bottoms of its feet as input, and moves servos controlling legs attached to a small body containing the robot's computer system.
Once the specification is clear, a design for a synthetic brain can be formulated and described in a modeling language. It may be written from scratch, or built up by including libraries containing building blocks such as primitive neural circuits from which neural fabrics are created, and pre-fabricated I/O channels designed to work with specific I/O devices like articulated legs and video cameras. The model may be componentized, with separate components assigned to different engineering teams that use their own simulators as virtual testbeds.
With the model completely described, it is compiled and submitted to the simulation servers, which carry out the neurogenesis process.
Once the servers have finished the neurogenesis process, they may be started together, along with the robot and its I/O interface computer. The synthetic brain engineer then begins debugging the system, using real-time monitoring and visualization tools to watch the I/O channels and the components of the simulated model operate. Corrections are made to the model as necessary until it has been properly tested and debugged.
Once a model has been debugged and performs properly, it may be deployed: the simulation files are collectively copied to digital media and loaded onto headless embedded servers, which perform autonomous simulation in the production environment.