One of the difficulties I have when modeling my different theories of neuronal network architectures is visualizing the relationships within the system. I was thinking of creating a Unity (video game engine) plugin that would visualize the network with colors and shapes, so that as the network processes input and learns, the relationships between components could be interpreted and identified much more easily.
There is a library, TensorSpace.js, attempting something similar that I thought was interesting: https://medium.freecodecamp.org/tensorspace-js-a-way-to-3d-visualize-neural-networks-in-browsers-2c0afd7648a8
Creating that 3D visualization project has a long lead time before getting results, as I would have to change the accessibility of some of my network variables to do it properly, so I stuck with my “lean” thinking and opted to model my network in an Excel file instead.
My Excel file draft is linked here so you can follow along with how I plan to implement some of the concepts I described in my previous post – Synapse: The New Hypothesis.
I created a network that is as simple as I could make it to try to prove out some of the concepts. It looks like this:
- Input (2 features):
  - Green Light / Red Light
- Action (1 feature):
  - Go / Stop
- Goal (1 feature):
  - Feedback
The input of this system is a two-dimensional vector, with one feature being the “Green light” feature and the other being the “Red light” feature. The model has a single hidden layer with 4 neurons, which then produces an output (action) that is a one-dimensional vector (is it a vector if it’s just a single number?) with action features of either “Go” or “Stop”.
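To make the shape of the network concrete, here is a minimal sketch of that architecture in Python/NumPy. The weight initialization and variable names are my own placeholders, not anything from the Excel file:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Layer sizes: 2 input features (green, red) -> 4 hidden neurons -> 1 action output
W1 = rng.standard_normal((2, 4))  # input-to-hidden weights
b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1))  # hidden-to-output weights
b2 = np.zeros(1)

def forward(x):
    """Feed a 2-feature input through the hidden layer to the action output."""
    h = sigmoid(x @ W1 + b1)     # hidden layer activations
    return sigmoid(h @ W2 + b2)  # action output, squashed into (0, 1)

green_light = np.array([1.0, 0.0])  # [green, red]
action = forward(green_light)       # single value between 0 and 1
```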
There is a “motor system” that interprets the output (action) of the network and decides to either “Go” or “Stop”. With the output guaranteed to be between 0 and 1 (sigmoid activation), my motor system will choose to go if the result is greater than 0.5, and stop if it is less than 0.5.
Motor System Interpretation of Output
- Go > 0.5
- Stop < 0.5
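That interpretation rule is a one-liner. In this sketch I’ve assumed an output of exactly 0.5 falls to “Stop”, since the post only specifies the strict inequalities:

```python
def motor_system(action_value):
    # Interpret the sigmoid output: > 0.5 means "Go", otherwise "Stop"
    return "Go" if action_value > 0.5 else "Stop"
```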
The goal sensor in this system is a “Feedback” sensor. This is a single-dimension vector that represents whether the system made a good choice (Go when Green) or a bad choice (Go when Red). In future implementations this could be used as a way of having real-world interaction with the system and rewarding behaviors that you would want to repeat.
From an architectural perspective, this system is largely just a feed-forward network, with a couple of changes: 1 – the “ideal” output is not known up front the way it is with labeled data, and 2 – there is an additional variable, the “goal sensor”. The goal sensor is needed so that an “ideal” output can be constructed, weighted appropriately by how “close” the system is to achieving the goal.
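One way that construction could work, assuming a binary good/bad feedback signal, is to take whichever extreme the motor system committed to and either reinforce it or invert it. This is my own sketch of the idea, not the exact formula from the Excel file:

```python
def ideal_action(action_value, feedback):
    """Derive a hypothetical 'ideal' output from the goal (feedback) sensor.

    feedback near 1.0 means the choice was good; near 0.0 means it was bad.
    A good choice makes the ideal the extreme of the choice actually made;
    a bad choice makes the ideal the opposite extreme.
    """
    chose_go = action_value > 0.5
    if feedback >= 0.5:                   # good choice: reinforce it
        return 1.0 if chose_go else 0.0
    else:                                 # bad choice: target the opposite
        return 0.0 if chose_go else 1.0
```

The error between this constructed ideal and the actual output could then drive an ordinary backpropagation step, just as a labeled target would.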
In the Excel file linked above you should be able to piece apart how that happens. There are some things about how I’ve set up the variables that need explanation, but the concept, and how I plan to explicitly arrive at an “ideal action”, should be interpretable from the file.
I’ve defined some of the core components of the system to help clarify and guide.
Senses are: numeric interpretations of information that are read by sensors.
Inputs are: sensors of the world.
Actions are: sensors of the motor system.
Goals are: sensors of an internal state and require an ideal vector.
*This sensor must be encoded in a way that allows a Euclidean distance to be computed between the current goal sensor reading and the ideal.
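That distance requirement is just the usual Euclidean norm of the difference between the two readings; a minimal sketch, with hypothetical argument names:

```python
import numpy as np

def goal_distance(goal_reading, goal_ideal):
    # Euclidean distance between the current goal sensor reading and its ideal
    return float(np.linalg.norm(np.asarray(goal_reading) - np.asarray(goal_ideal)))
```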
Motor system is: a component that interprets actions in order to create behaviors in the system.
Hopefully the accessibility of the Excel file and the definitions give enough information to understand what I’m attempting to build. Let me know if you have any questions, and when I get a version coded and implemented I’ll post it here, so stay tuned!