How Synthetic Sentience Works

Synthetic Sentience (SS) is a new approach to AI. It is not based on knowledge engineering, nor on the deep learning and machine learning techniques used to create the highly functional yet limited forms of AI we have today. It doesn't chat or produce pictures; instead, it provides autonomy for physical robots and virtual agents by modeling real-world experiences and modifying its behavior accordingly. SS leverages digital models of biologically inspired brains, synthesized in the laboratory, that run on real-time simulation servers connected to physical robotics equipment or virtual agents.

Synthetic Brain Models

Synthetic brain models are designed using NeuroSynthetica's Workbench CAD software and its SOMA modeling language. These design tools define both the large-scale structure and the small-scale computing fabric of a spiking neural network, as well as its I/O channels to robots.
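SOMA's actual syntax is proprietary and not shown here, but the Python sketch below illustrates the kinds of elements such a model captures: named neuron regions (large-scale structure), projection rules between them (small-scale fabric), and I/O channels bound to a robot. Every class and field name in this sketch is an assumption for illustration, not NeuroSynthetica's API.

```python
# Hypothetical sketch of what a synthetic brain model captures; these
# classes are illustrative and are not the Workbench/SOMA API.
from dataclasses import dataclass, field

@dataclass
class Region:
    name: str          # large-scale structure: a named population of neurons
    neuron_count: int
    neuron_type: str   # e.g., "excitatory" or "inhibitory"

@dataclass
class Projection:
    source: str        # small-scale fabric: how regions wire together
    target: str
    probability: float # chance a given source neuron synapses on a target
    weight: float      # positive = exciting, negative = damping

@dataclass
class IOChannel:
    name: str
    direction: str     # "sensory" (robot -> brain) or "motor" (brain -> robot)
    region: str        # region the channel is wired into

@dataclass
class BrainModel:
    regions: list = field(default_factory=list)
    projections: list = field(default_factory=list)
    channels: list = field(default_factory=list)

# A toy model: camera input drives a sensory region, which projects to a
# motor region that drives the robot's wheels.
model = BrainModel(
    regions=[Region("vision", 5000, "excitatory"),
             Region("motor", 800, "excitatory")],
    projections=[Projection("vision", "motor", probability=0.05, weight=0.4)],
    channels=[IOChannel("camera", "sensory", "vision"),
              IOChannel("wheels", "motor", "motor")],
)
```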

Generating Simulations from Models

Once a model is defined, a simulation incorporating all of the model's elements is generated on a commercial off-the-shelf (COTS) server running NeuroSynthetica's Sentience Engine real-time simulation software, and saved to disk for use when the server runs the simulation. This step, called neurogenesis, actually grows a synthetic brain in a database on the server. The database captures the model's complete structure: every neuron and all of its connections to other neurons, the sensory input nerve channels from the robot, and the motor control channels to the robot.
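As a rough illustration of what neurogenesis produces, the sketch below expands a compact model specification into an explicit database holding every neuron and synapse. The schema, function name, and use of SQLite are assumptions for illustration; the Sentience Engine's actual on-disk representation is not documented here.

```python
# Hypothetical neurogenesis sketch: expanding a compact model spec into an
# explicit record of every neuron and synapse. Illustrative only.
import random
import sqlite3

def neurogenesis(spec, db_path="brain.db"):
    """Grow a synthetic brain database from a model spec."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS neurons (id INTEGER, region TEXT)")
    db.execute("CREATE TABLE IF NOT EXISTS synapses "
               "(src INTEGER, dst INTEGER, weight REAL)")

    # Instantiate every neuron in every region.
    neuron_ids = {}
    next_id = 0
    for region, count in spec["regions"].items():
        ids = list(range(next_id, next_id + count))
        neuron_ids[region] = ids
        db.executemany("INSERT INTO neurons VALUES (?, ?)",
                       [(i, region) for i in ids])
        next_id += count

    # Instantiate every synapse implied by the projection rules.
    for src_region, dst_region, prob, weight in spec["projections"]:
        rows = [(s, d, weight)
                for s in neuron_ids[src_region]
                for d in neuron_ids[dst_region]
                if random.random() < prob]
        db.executemany("INSERT INTO synapses VALUES (?, ?, ?)", rows)

    db.commit()
    return db

spec = {
    "regions": {"vision": 200, "motor": 50},
    "projections": [("vision", "motor", 0.05, 0.4)],
}
brain = neurogenesis(spec)
print(brain.execute("SELECT COUNT(*) FROM synapses").fetchone())
```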

Running Simulations on the Server

Once a simulation is generated, the entire set of neurons and the communication between them, together with the input and output data streams of a connected robot, is simulated in scalable time slices of 1 ms to 1000 ms, matched to the time scale of the robot's environment. During each time slice, neurons may become excited and spike, exciting or damping other neurons. Some neurons receive stimulation over sensory input channels from the robot's environment; others send stimulation over output channels to the robot's control systems, driving actuators or other behavior.
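The Sentience Engine's internals aren't described here, but the general shape of such a time-sliced loop can be sketched with a simple leaky integrate-and-fire neuron model. That neuron model, and all the constants below, are assumptions standing in for whatever the engine actually uses.

```python
# Minimal time-sliced spiking simulation (leaky integrate-and-fire), to show
# the general shape of the loop; not the Sentience Engine's actual algorithm.
import numpy as np

N = 100                      # neurons
DT_MS = 10                   # time slice; scalable from 1 ms to 1000 ms
THRESHOLD = 1.0              # membrane potential at which a neuron spikes
DECAY = 0.9                  # leak applied each slice

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, (N, N))   # + weights excite, - weights damp
potential = np.zeros(N)

def run_slice(sensory_input):
    """Advance the network by one time slice; return the motor output."""
    global potential
    potential = potential * DECAY + sensory_input
    spikes = potential >= THRESHOLD           # which neurons fire this slice
    potential[spikes] = 0.0                   # reset fired neurons
    potential += weights.T @ spikes           # propagate excitation/damping
    return spikes[:10]                        # first 10 neurons drive motors

for t in range(0, 100, DT_MS):
    sensory = rng.random(N) * 0.3             # stand-in for robot sensor data
    motor = run_slice(sensory)
```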

Robots and Virtual Agents

As the robot's application program runs, it calls API functions provided by the NeuroSynthetica Robot Library to establish a connection over a TCP/IP network (including Wi-Fi and broadband wireless networks). This allows the robot to send sensory data to the server and to receive control information from the server, which can be used to move actuators or perform virtual functions.
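The Robot Library's actual functions and wire format aren't documented in this overview, so the sketch below approximates the pattern with raw sockets and JSON messages. The server address, message fields, and helper functions are all illustrative assumptions, not the library's API.

```python
# Hypothetical robot-side client loop, sketched with raw sockets and JSON
# lines. The real NeuroSynthetica Robot Library defines its own API and wire
# format; everything named here is an illustrative assumption.
import json
import socket

SERVER = ("sentience-server.local", 9000)   # assumed address of the server

def read_sensors():
    # Stand-in for real sensor hardware.
    return {"sonar_cm": 42.0, "bumper": False}

def apply_controls(controls):
    # Stand-in for real actuator hardware.
    print("drive:", controls.get("wheel_speeds"))

with socket.create_connection(SERVER) as conn:
    stream = conn.makefile("rw")
    while True:
        # Send one frame of sensory data to the simulation server...
        stream.write(json.dumps({"sensors": read_sensors()}) + "\n")
        stream.flush()
        # ...and apply whatever control frame the server sends back.
        reply = stream.readline()
        if not reply:
            break
        apply_controls(json.loads(reply))
```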

Because a robot is simply an application that calls the NeuroSynthetica Robot APIs, it need not operate in the physical world; it can instead run as an application on a server or end system and perform tasks in cyberspace. These applications, called virtual agents, can consume inputs such as financial market, news, and weather feeds, or proprietary real-time company data, and can take actions that are cyberspace-oriented.
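To make the distinction concrete, the sketch below shows how a virtual agent fits the same loop as a physical robot, with data feeds standing in for sensors and cyberspace actions standing in for actuators. Every feed and action name here is a hypothetical placeholder.

```python
# Hypothetical virtual agent: the same sensory/control pattern as a physical
# robot, but its "sensors" are data feeds and its "actuators" are cyberspace
# actions. Feed contents and action names are illustrative assumptions.

def read_feeds():
    # Stand-in for financial market, news, weather, or company data feeds.
    return {"price": 101.25, "headline_sentiment": 0.3}

def act(decision):
    # Stand-in for a cyberspace action, e.g., placing an order or an alert.
    print("action:", decision)

# In a full agent, read_feeds() output would be sent to the simulation
# server exactly like robot sensor data, and the server's control output
# would be mapped onto act().
for _ in range(3):
    act({"order": "hold", "inputs": read_feeds()})
```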