I was on my way to Northwest Arkansas for work and had some time on the plane, so I started working through some of the struggles I’ve had in building an AI platform, mainly the struggle of not breaking the definition I put together originally. I’ve had a couple of quick pivots recently, and that got me thinking.
With a couple of hours to burn, I had the time to focus, and I had a pretty big breakthrough on some of the architectural details of the neural network implementation of the system. It’s a network setup that lets me keep my system general without abandoning the original definition. Before this breakthrough, one of the strategies I was toying with in my search for answers was to “reframe” the requirement, or in this case redefine AGI, in order to make something “work”.
I put that in quotes because it is important to realize that by making it “work” I was actually admitting failure against the definition. I was being pressured into solving a problem that had no clear answer. I’m not proud of this, but when the going got tough, I was looking for a solution that was “good enough” and got me “pretty close” to a system that functioned the way I envisioned and originally defined. In many cases “good enough” works well enough in a business scenario. But in these situations we do ourselves a disservice by not fully respecting the requirement, honoring the original vision, and pushing ourselves to the best solution.
If we treat requirements as pass or fail, rather than as something to compromise on once we get “close enough,” we can push ourselves past struggles and arrive at innovative solutions. If we accept a “good enough” solution, we are giving ourselves the easy way out and not fully testing our abilities.
In a scenario where you believe you have reviewed all possible options and cannot fulfill the requirements, pause for a minute and back away from the problem to get a new point of view. Review whether the requirements are impossible by definition (a circle cannot be a square), or whether there is a missing piece of the puzzle still to be identified.
I experienced this recently at my day job, where a developer was struggling with an equation for handling feed consumption and deliveries for chickens while also handling the logistical needs of the trucks that deliver the feed. A lot of variables to juggle at once.
The requirement was to allow an operator to set the delivery date manually, instead of the automated delivery dates the system currently produces. This is tricky because there are a lot of variables that need to be calculated to make sure the birds do not starve. If a delivery is needed prior to the set delivery day because the birds are eating much more than anticipated, the system needs to accommodate that.
One of the expectations of this system is that after setting a date, and without changing any other variables, the number of deliveries and the amounts to be delivered would not change over the life of the flock. The developer was struggling because these values were changing; not significantly, but enough to be noticeable.
The developer was frustrated, trying many different complicated ways of adding things up properly, and was convinced there was no way the numbers would ever stay the same because there were too many variables. To me, no matter how many variables were involved, the only one that changed was switching the automated date to manual, so effectively this was a single-variable change. Something was off, and the implementation was wrong.
After focused thought and a review of the potential solutions the developer was working on, he and I arrived at a change to a small piece of the code, one line moved out of one loop and into another, which fixed the dates for feed deliveries without changing any of the other fields.
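The actual production code isn’t shown here, but the class of bug is a common one: a line of state, such as a running feed balance, living in the wrong loop. A minimal sketch, with all names and numbers invented for illustration:

```python
def plan_deliveries(daily_need, capacity):
    """Plan full-truck feed deliveries so the birds never run out.

    Returns the list of day indexes on which a delivery is made.
    In the buggy version, the `on_hand = 0.0` line below sat *inside*
    the day loop, wiping out leftover feed every day and distorting
    the schedule. Moving it outside the loop, a one-line change,
    made the delivery count and amounts stable.
    """
    on_hand = 0.0          # the line that had to move out of the loop
    deliveries = []
    for day, need in enumerate(daily_need):
        if on_hand < need:     # would run out today, so deliver first
            deliveries.append(day)
            on_hand += capacity
        on_hand -= need
    return deliveries

# Ten days of eating 10 units/day, with a 35-unit truck:
# deliveries land on days 0, 3, and 7.
print(plan_deliveries([10.0] * 10, 35.0))
```

With the reset line inside the loop, the leftover feed is discarded each day and the function would schedule a delivery every single day; with it outside, the carry-over accumulates correctly and the schedule never shifts unless an input actually changes.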
This chicken story has parallels in the world of AI. We are still compromising on the solutions we build because we are not having the right conversation about which variables are valid and which are not variables at all, added only because of a compromise on the definition. With the three major learning paradigms out there, supervised, unsupervised, and reinforcement learning, the compromises are clear. To be fair, many of these systems were not and are not designed to be AGIs, but with some discussing AlphaGo (an advanced reinforcement learning system) as the next coming of AGI, I have to put a critical eye to it.
- Compromise on Data
  - The answer has to be given, which results in a chicken-and-egg scenario: if all knowledge needs a supervisor, then who was the original supervisor?
- Compromise on Goals
  - This system lacks a definition for a flexible set of goals; its architecture is designed for classification and does not generalize well to many other tasks.
- Compromise on Environment
  - This system requires manual creation of the rules of the environment. AlphaGo handles a VERY complex system, but it is still nothing close to as complex as the real world.
I will continue to focus on the definition of AI and avoid manual interventions: in the setup of the environment, in testing whether the system is providing a correct answer, or in anything that limits its generality to other actions. These challenges will need to be addressed in order for AGI to be realized.
With this less-is-more approach to AGI, your work will be better served by limiting its variables. Parsimony is critical: the simplest solution is always the right solution. If you are working on a system where you are creating more setup and configuration, you are starting down the wrong road. To put it brashly, an AGI should be “born” and then it should evolve from there.
Thoughts? Have you ever had this type of scenario, where you put the headphones on, focus intently for a couple of hours, and walk away with a completely different point of view on a problem?