Another day, another attempt at arriving at an AI. I’ll start this article by stating that there will be at least a Take 3 – the failure of this system has me hitting the drawing board. I implemented the architecture I had hopes for below, and ended up with a success measure no better than random – the system does not learn its actions from its goals, whether the network is optimized for the ideal goal or the actual one.
In this system the hope was that the back-propagated gradients would trickle all the way up past the actions and into the sensor layers, nudging the system towards the actions that drive it towards the ideal goal.
This system was learning to predict the goals from the sensors and the action middle layer, but that did not trickle up to the sensor layers. Back propagation was only tweaking the network to predict the goal sense; and in the scenario where it was optimized against the ideal sense, it was only tweaking the weights to match the goal feature that was statically set as ideal. There was no relation between the ideal and the actual – the goal was not being met, yet the system kept tuning itself towards the static features of an ideal goal.
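A minimal sketch of this failure mode, under my own simplifying assumptions (a linear goal predictor, a single fixed sensor reading, and a hypothetical environment where the actual goal depends only on the action taken – none of these details are from the real system): training can drive the *predicted* goal all the way to the static ideal while the *actual* goal never moves, because the action itself receives no update.

```python
import numpy as np

IDEAL_GOAL = 1.0  # static ideal goal feature (assumed for the demo)

def actual_goal(action):
    # Hypothetical environment: the goal is best met when action is near 0.5.
    return 1.0 - abs(action - 0.5)

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # linear predictor weights over [sensor, action]

sensor = 0.3   # fixed sensor reading
action = 0.9   # the action the system keeps taking; never updated below

lr = 0.1
x = np.array([sensor, action])
for _ in range(500):
    pred = w @ x                        # predicted goal
    grad = 2.0 * (pred - IDEAL_GOAL) * x  # gradient of (pred - ideal)^2 w.r.t. w
    w -= lr * grad                      # only the predictor's weights move

pred = w @ x
# The predictor now reports the ideal goal (~1.0), but the environment's
# actual goal for this action (0.6) is unchanged: prediction was tuned,
# behaviour was not.
print(pred, actual_goal(action))
```

The gradient flows only into the predictor's weights, never back into the choice of action – which is exactly the "tuning towards the static features of an ideal goal" problem described above.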
So… the second failure. What did I learn in this round? I am going to hit the drawing board and think about how to capture the relationship between the ideal and the actual goal sensors. This is a critical feature that the current iteration of the system does not capture.
Stay tuned… the weekend is soon but there are more than a couple of nights where I can consolidate my thoughts on this. Round 3 to come!