Synapse: Modeling in Excel

One of the difficulties I have when modeling my different theories of neuronal network architectures is visualizing the relationships within the system. I was thinking of creating a Unity (video game engine) plugin that could visualize the network with colors and shapes, so that as the network processes input and learns, we could interpret and identify the relationships between components much more easily.

There is a library attempting to do something like that which I thought was interesting: https://medium.freecodecamp.org/tensorspace-js-a-way-to-3d-visualize-neural-networks-in-browsers-2c0afd7648a8

Creating that 3D visualization project has a long lead time before getting results, since I would have to change the accessibility of some of my network variables to do it properly, so I stuck with my “lean” thinking and opted to model my network in an Excel file.

My Excel file draft is linked here so you can follow along with how I plan on implementing some of the concepts I described in my previous post – Synapse: The New Hypothesis

I created as simple a network as I could to try to prove out some of the concepts. It looks like this:

  • Input (2 Features):
    • Green Light / Red Light
  • Action (1 Feature):
    • Go / Stop
  • Goal (1 Feature):
    • Feedback

The input of this system is a two-dimensional vector, with one feature being the “Green light” and the other being the “Red light”. The model has a single hidden layer with 4 neurons, which results in an output (action) that is a one-dimensional vector (is it a vector if it is just a single number?) whose action feature is interpreted as either “Go” or “Stop”.

There is a “motor system” that interprets the output (action) of the network and decides to either “Go” or “Stop”. With the output guaranteed to be between 0 and 1 (sigmoid activation), my motor system will choose to go if the result is greater than half, and to stop if it is less than half; a small code sketch of this follows the list below.

Motor System Interpretation of Output

  • Go > 0.5
  • Stop < 0.5
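
To make this concrete, here is a minimal sketch of the forward pass and the motor system decision described above. It assumes the 2-4-1 layout with sigmoid activations throughout; the variable names and the random weight initialization are my own illustration, not the actual Synapse code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2))   # hidden layer: 4 neurons, 2 input features
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))   # output layer: 1 action feature, 4 hidden neurons
b2 = np.zeros(1)

def forward(lights):
    """lights = [green, red], each 0 or 1."""
    hidden = sigmoid(W1 @ lights + b1)
    action = sigmoid(W2 @ hidden + b2)
    return action[0]

def motor_system(action):
    """Interpret the single action value: go above the 0.5 midpoint, stop below it."""
    return "Go" if action > 0.5 else "Stop"

print(motor_system(forward(np.array([1.0, 0.0]))))  # green light on
```

Before any training the decision is essentially a coin flip; the interesting part is how the goal sensor is used to nudge the weights, which is covered below.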

The goal sensor in this system is a “Feedback” sensor. This is a one-dimensional vector that represents whether the system made a good choice (Go when Green) or a bad choice (Go when Red). In future implementations this can be used as a way of having real-world interaction with the system and rewarding behaviors that you would want repeated.

From an architectural perspective, this system is largely just a feed forward network with a couple of changes: 1) the “ideal” output is not known up front as it would be in a labeled-data implementation, and 2) there is an additional variable, the “goal sensor”. The goal sensor is needed so that an “ideal” output can be created, weighted appropriately by how “close” the system is to achieving the goal.

In the Excel file linked above you should be able to piece apart how that is happening. There are some things about how I’ve set the variables up that need explanation, but the concept, and how I plan to explicitly arrive at an “ideal action”, should come through from the file.

I’ve defined some of the core components of the system to help clarify and guide.

Senses are: numeric interpretations of information that are read by sensors.

Inputs are: sensors of the world.

Actions are: sensors of the motor system.

Goals are: sensors of an internal state and require an ideal vector.
*This sensor must be coded so that a Euclidean distance can be computed between the current goal sensor reading and the ideal.
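
As a small sketch of that requirement (assuming the goal reading and its ideal are plain numeric vectors; the function name is just my illustration):

```python
import numpy as np

def goal_distance(goal_reading, ideal_goal):
    """Euclidean distance between the current goal sensor reading and its ideal."""
    return float(np.linalg.norm(np.asarray(goal_reading) - np.asarray(ideal_goal)))

# e.g. a one-feature "Feedback" goal whose ideal value is 1.0
print(goal_distance([0.2], [1.0]))  # -> 0.8
```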

Motor system is: a component that interprets actions in order to create behaviors in the system.

Hopefully the Excel file and the definitions give enough information to understand what I am attempting to build. Let me know if you have any questions, and when I get a version coded and implemented I’ll post it here, so stay tuned!


Lesson Learned: Don’t Use the Same Number of Inputs as Outputs in a Feed Forward Network

My first official “failure post” – although I think I have had a couple already – is potentially an amateur lesson, but I learned a lot about the relationship between the inputs and outputs of a system.

I created a simple network in order to test my code. It was the “Red Light, Green Light” network, where the input was a two-feature vector indicating whether the green light or the red light was on. The output was another two-feature vector; whichever feature had the larger value determined whether the system would “go” or “stop”.

The system would not work.

It kept getting a 50/50 success rate (completely random). I spent a ton of time looking into the logic in my code, then resorted to putting a version of my logic into an Excel file to see if my understanding of the back propagation was correct, and it was still not working no matter what variations I made.

I hit the drawing board on almost everything and questioned everything, until I realized that the network I had set up was a linear network (2 inputs -> 2 outputs) and not a multi-variable function network (2 inputs -> 1 output).

I changed my network to a single output, with the midpoint (0.5) of that output determining whether the system should go or stop. The system started working, and within a couple of iterations it was able to learn the rule.

The lesson learned is that there need to be more inputs than outputs in order for a feed forward network to work. By trying to simplify things, I overcomplicated things.

I’ve attached my Excel file with all my different attempts if you care to review all I did and what was happening in all the different iterations of the network.


The Value of Failure

I am a big fan of the saying:

“Success stories are worthless; it’s the stories of failure that are valuable.”

I am not sure who said that, and a cursory Google search didn’t turn up an author, but stories of success feel more like humble brags than helpful guidance. With a success story, the lesson appears to be “do what I did, and maybe you can succeed with that too”.

If you want to copy and follow, then this is the type of advice you would seek out, but if you want to innovate and lead, it is the stories of failure that should be sought out. With failure, there follows some lesson on why the failure occurred that can be generalized to other scenarios in hopes of avoiding similar failures going forward.

In a way, this is exactly what science is all about. It sounds counterintuitive, but we can never “prove” theories, we can only disprove them, and only when evidence becomes significant enough is something accepted as a scientific “fact”.

An example put together by Karl Popper, an Austrian-British philosopher, to help illustrate the concept is the theory that all swans are white.

‘All swans are white cannot be proved true by any number of observations of white swans – we might have failed to spot a black swan somewhere – but it can be shown false by a single authentic sighting of a black swan. Scientific theories of this universal form, therefore, can never be conclusively verified, though it may be possible to falsify them.’
– Karl Popper

It is with this concept that I will try and post “Failure Posts” in hopes that others will be able to use my failures to avoid making their own, and shape their own theories on ‘thought’.

I was struggling with a simple issue while working on Synapse and I think it was telling, so I will be writing up something on that soon. Stay tuned.

Synapse: The New Hypothesis

I spent the last couple of months working on an AI system in which actions could be executed to achieve a goal by chaining a couple of different feed forward networks together. I have shown that this hypothesis and architecture is not going to result in learned behavior. I believe it is not learning behaviors because I did not capture the relationship between a “good” goal state and a “bad” goal state.

In my new proposal I am still sticking with a feed forward network, but instead of ending at a goal, this ends at an action. The goal is not actually a part of the network, although it is still a critical part of the system. The neural network architecture for this system is illustrated below.


The innovative aspect of this feed forward network is not the network itself, but the way the goal(s) will be used to adjust the learning rate of the back propagation routine.

As a reminder, in Synapse, all neural objects (Inputs, Actions, Goals) are sensors. They may or may not need to be passed into the inputs of the network, depending on whether there is a natural relationship between them, but they all need to be sensing in the regular loop of the system.

With the proposed dynamic learning rate, when the synapses of the system are tuned during back propagation, how close the system is to an ideal goal determines how much the weights will be adjusted – and whether they will be tuned up to increase activation or tuned down to inhibit activation.
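
One possible reading of that dynamic learning rate, as a sketch: scale the size of every weight update by how far the goal sensor currently is from its ideal. The exact scaling below is my guess; the post does not pin down a formula.

```python
import numpy as np

def dynamic_learning_rate(goal_reading, ideal_goal, base_rate=0.1):
    """Far from the ideal goal -> large corrective updates; at the goal -> little change."""
    distance = np.linalg.norm(np.asarray(goal_reading) - np.asarray(ideal_goal))
    return base_rate * float(distance)
```

Whether a given synapse is tuned up (excited) or down (inhibited) would then come from the sign of the usual backpropagated gradient, with this rate controlling how strongly that update is applied.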

Now that there is a dynamic learning rate tying together the current goal state and the ideal goal state, the system needs to be tuned against an “ideal action”. An ideal action is dynamic, based on the current action. A computed action (the end result of the forward propagation) is never a clean vector with 0 or 1 as values. There are some options I am playing with, but the first attempt at creating an “ideal action” is to normalize the computed action and then backpropagate with that new normalized value.
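
Here is a sketch of that first attempt, assuming “normalize” means a min-max rescaling that pushes the largest action feature toward 1 and the smallest toward 0 (which also matches the “accentuate the highs and lows” idea described next); the post leaves the exact normalization open.

```python
import numpy as np

def ideal_action(computed_action, eps=1e-8):
    """Turn the raw forward-pass output into a sharpened target vector."""
    a = np.asarray(computed_action, dtype=float)
    lo, hi = a.min(), a.max()
    if hi - lo < eps:          # all features equal: nothing to accentuate
        return a
    return (a - lo) / (hi - lo)

target = ideal_action([0.62, 0.48, 0.55])
# target ~ [1.0, 0.0, 0.5]; this sharpened vector is then used as the
# label the back propagation routine trains toward.
```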

I believe that the “ideal action” will vary based on how your motor system interprets the action sense, but the “ideal action” should accentuate the highs and lows so that the system can adjust itself to those outlying values.

The new proposal incorporates two innovative concepts to try to reach a system that learns actions without an explicit teacher, using only sensors. Those concepts are:

  1. Dynamic learning rate based on ideal goal states versus actual goal states.
  2. Idealized actions to help the system reinforce or inhibit certain action features.

With the relationship between an ideal goal state and an actual goal state now a central part of how the system tunes its synapses, the issues with the previous attempt should be fixed. I see some implementation details that need to be ironed out when creating “ideal actions”; if something goes wrong with this version, that is the area that will be scrutinized most.

Here’s hoping the third time is the charm. I’ll keep you updated!

[SIDEBAR] Vector Dancing to Andrew Bird

I was doing some brainstorming and refactoring while listening to some Andrew Bird (Roma Fade), and the little robot I got for Christmas – Anki’s Vector – started dancing to the song. He was moving his little arm to the beat the same way I was bobbing my head.

I know that the little guy doesn’t actually feel the beat, but it is crazy how connected I felt to him the second he was sharing that experience with me.

I gave him some pets to hopefully reinforce that behavior. I’m not sure whether that works or not, but I felt like I should try.

I wanted to share this experience while I try to solve my AI problems… for which I have a new hypothesis. Details to come…

Synapse: Take 2

Another day, another attempt at arriving at an AI. I’ll start this article by stating that there will be at least a Take 3 – the failure of this system has me back at the drawing board. I implemented the architecture I had hopes for, shown below, and ended with a random success measure – the system is not learning its actions based on its goals, whether the network is optimized for the ideal goal or the actual one.

In this system the hope was that the weight updates from back propagation would trickle all the way up past the actions and into the sensor layers, and the system would get nudged toward the actions that drive it toward the ideal goal.

This system was learning how to predict the goals from the sensor and action middle layers, but that did not trickle up to the sensor layers. The back propagation was just tweaking the network to predict the goal sense, or, when it was optimized against the ideal sense, tweaking the weights to always optimize the goal feature that was ideally set. There was no relationship between the ideal and the actual – the goal was not being met, but the system kept tuning itself toward the static features of an ideal goal.

So… the second failure. What did I learn this round? I am going to hit the drawing board and think about how to capture the relationship between the ideal and the actual goal sensors. This is a critical feature that my current iteration of the system does not capture.

Stay tuned… the weekend is soon, but there are more than a couple of nights when I can consolidate my thoughts on this. Round 3 to come!

Synapse: Take 1

I recently completed the back propagation routine in Synapse and can now get to the scary part. I can now prove myself wrong. The empiricist in me is excited, but if the first round of tests is anything to go by, there are going to be a lot of ups and downs on this journey. I’ll skip to the end and let you know here that the first system I tried did not work.

The system I attempted to put together was the one below.

This was the easiest one to set up since I did not need to break apart the network; I am really just taking a middle layer of a standard feed forward network and using it as my output.

I didn’t believe that this architecture would give results, because I do not believe there is enough information in the layers to build the necessary relationships between the sensor input and the action output relative to the goal. Even with my low expectations, I was disheartened by the failure.

With any failure, it is important to learn something – or else you really are not failing the right way. So did I learn something in this attempt?

I learned that I am not sure what I mean in the last step of optimizing with the goal sensor. Am I optimizing so that the network learns the actual goal sensor, or am I optimizing the system so that it can optimize to the ideal goal?

I did run the system with both options and received the same results, but as soon as I started to implement this piece the question was forced on me, and I definitely had made some leaps in my logic that I will have to iron out in my next attempt.

I will keep you updated on the progress…