Adding Supervised Learning to Synapse

I started my AI platform (Synapse) with the understanding that I wanted to make an AI that paralleled some human constructs, as the field of AI has too many of its own terms, which makes learning AI more complicated than it needs to be.

This meant that Synapse would be an unsupervised system. The difference between a supervised and an unsupervised system is labeled data versus unlabeled data. An example of labeled data for supervised learning would be the CAPTCHA tests you have to pass to prove you are not a robot when logging into your favorite service. By selecting the images that all have bikes in them, you are helping label image data. Without the label, the system cannot learn what a bike is. Labeled data provides the correct answer, while unsupervised learning doesn't have any right or wrong; it just learns the features of the data being processed. [What’s the difference between supervised and unsupervised?]

I made a definition for AI only slightly modified from a techemergence.com definition, and stated that all AIs could be described with it. So does that definition apply to unsupervised AI alone, or to supervised as well?

“Artificial intelligence is an entity, able to receive and process inputs from the environment in order to flexibly act upon the environment in a way that will achieve goals over time.”

With this definition, it is important to identify the new environment (the MNIST data) and determine what each of the pieces is; a rough code sketch follows the list below.

  • Environment:
    • MNIST Dataset
  • Inputs:
    • 28×28 pixel images
  • Actions:
    • Print a Number (0, 1, 2…9)
  • Goals:
    • Accurately Interpret the Numbers
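
To make those pieces concrete, here is a rough sketch in code. Everything in it (the class and method names, the stub bodies) is a placeholder I made up for illustration, not settled Synapse code.

```csharp
// Hypothetical sketch: the MNIST pieces mapped onto objects. All names are placeholders.
public class MnistEnvironment
{
    private float[][] images; // inputs: each image flattened to 784 (28x28) pixel values
    private int[] labels;     // goals: the digit each image actually shows

    public float[] GetInput(int index) { return images[index]; }
    public int GetGoal(int index) { return labels[index]; }
}

public class SupervisedSynapse
{
    // Action: "print" a digit for the given pixels. The processing is the part still to be built.
    public int Act(float[] pixels)
    {
        return 0; // stub; the real system would interpret the pixels
    }
}
```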

The trick for a supervised system in Synapse is that in the unsupervised implementation, the goal is usually an ideal sensor state. For example, in my Super Truckin’ AI, the vehicle’s speed has a sensor, and there is a goal represented in the system of maximum speed. The system (theoretically at this point) would identify the relation between hitting the gas and getting closer to the goal of top speed, and learn to act by hitting the gas.

In a supervised system, if I were to provide the actual number as an input goal in addition to the pixels of the image of the number, the system would learn to ignore the image pixels and just repeat the goal as the number, which is essentially useless. That setup is like playing Jeopardy! where you need to answer with a question and you are handed the question already.

Since the goal is explicit in a supervised system, the system needs to optimize the output (action) against a dynamic goal (each image is a different number) rather than a static goal (go fast), because the goal is different and explicit for each image.
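
As a tiny sketch of that difference (again with made-up names), the static goal is a constant the system always moves toward, while the dynamic goal arrives alongside every input:

```csharp
// Sketch only: static goal vs. per-example (dynamic) goal. Names are placeholders.
public static class GoalKinds
{
    // Unsupervised Super Truckin' style: "go fast" never changes.
    public static float StaticGoalError(float currentSpeed)
    {
        const float goalSpeed = 100f;               // the same goal on every step
        return goalSpeed - currentSpeed;
    }

    // Supervised MNIST style: the goal comes with each image.
    public static bool DynamicGoalMet(int predictedDigit, int labelForThisImage)
    {
        return predictedDigit == labelForThisImage; // a different goal per image
    }
}
```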

This implementation of MNIST is not bringing anything new to the world of AI, but I plan on using the MNIST dataset as a test of my neural network. I’ll start with version 1 of the network using some basic parameters, and as the system evolves, I can use this data as a benchmark of progress.

I’ll let you know, after I implement the “pivot” in the system and add supervised learning, whether I have to revise the AI definition, but I think I have re-framed the problem in a way that solves it for a supervised implementation even if it breaks some of my architectural constructs.

Do you disagree? I hope so, because then one of us is going to learn something…


Pivot… Hello World Synapse, with MNIST

I started working on Synapse (AI platform) with the goal of creating an AI that would drive the vehicles in my racing game Super Truckin’. Unfortunately, after attempting to get Super Truckin’ up and running I have determined I will have to “pivot”.

When I updated Super Truckin’ to the new Unity 3D build, I was forced to update the Edy’s Vehicle Physics engine, and that broke everything. There was no automatic migration of the changes, and there were significant architectural changes in the physics.

Another challenge was that the library I was going to use for the feedforward and backpropagation methods was Encog (C#), and that library is incompatible with Unity: Unity only supports .NET 2.0, while Encog targets a newer version of the framework. [Encog]

This doesn’t change my goal, as I was still planning on writing my own feedforward and backpropagation methods, but it does put that off for a while until I can build the neural network pieces of my platform.
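
For reference, this is roughly the kind of feedforward/backpropagation code I expect to end up writing, sketched here as a single hidden-layer network with sigmoid units and a squared-error gradient step. It is a minimal illustration with placeholder names, not the eventual Synapse implementation, and it sticks to basic .NET so Unity can run it.

```csharp
using System;

// Minimal feedforward / backpropagation sketch in plain C# (no Encog). Illustration only.
public class TinyNetwork
{
    private readonly int nIn, nHidden, nOut;
    private readonly double[,] w1; // input  -> hidden weights
    private readonly double[,] w2; // hidden -> output weights
    private readonly double[] b1, b2;

    public TinyNetwork(int nIn, int nHidden, int nOut)
    {
        this.nIn = nIn; this.nHidden = nHidden; this.nOut = nOut;
        w1 = new double[nIn, nHidden]; w2 = new double[nHidden, nOut];
        b1 = new double[nHidden]; b2 = new double[nOut];
        Random rng = new Random(0);
        for (int i = 0; i < nIn; i++) for (int j = 0; j < nHidden; j++) w1[i, j] = rng.NextDouble() * 0.1 - 0.05;
        for (int j = 0; j < nHidden; j++) for (int k = 0; k < nOut; k++) w2[j, k] = rng.NextDouble() * 0.1 - 0.05;
    }

    private static double Sigmoid(double x) { return 1.0 / (1.0 + Math.Exp(-x)); }

    // Feedforward: propagate an input vector to the output layer.
    // The hidden buffer is filled as a side effect so Train can reuse it.
    public double[] FeedForward(double[] input, double[] hidden)
    {
        for (int j = 0; j < nHidden; j++)
        {
            double sum = b1[j];
            for (int i = 0; i < nIn; i++) sum += input[i] * w1[i, j];
            hidden[j] = Sigmoid(sum);
        }
        double[] output = new double[nOut];
        for (int k = 0; k < nOut; k++)
        {
            double sum = b2[k];
            for (int j = 0; j < nHidden; j++) sum += hidden[j] * w2[j, k];
            output[k] = Sigmoid(sum);
        }
        return output;
    }

    // Backpropagation: one stochastic gradient step toward a target vector.
    public void Train(double[] input, double[] target, double learningRate)
    {
        double[] hidden = new double[nHidden];
        double[] output = FeedForward(input, hidden);

        // Output-layer deltas (squared-error loss, sigmoid derivative).
        double[] deltaOut = new double[nOut];
        for (int k = 0; k < nOut; k++)
            deltaOut[k] = (output[k] - target[k]) * output[k] * (1 - output[k]);

        // Hidden-layer deltas.
        double[] deltaHidden = new double[nHidden];
        for (int j = 0; j < nHidden; j++)
        {
            double sum = 0;
            for (int k = 0; k < nOut; k++) sum += deltaOut[k] * w2[j, k];
            deltaHidden[j] = sum * hidden[j] * (1 - hidden[j]);
        }

        // Update weights and biases.
        for (int j = 0; j < nHidden; j++)
            for (int k = 0; k < nOut; k++) w2[j, k] -= learningRate * deltaOut[k] * hidden[j];
        for (int k = 0; k < nOut; k++) b2[k] -= learningRate * deltaOut[k];
        for (int i = 0; i < nIn; i++)
            for (int j = 0; j < nHidden; j++) w1[i, j] -= learningRate * deltaHidden[j] * input[i];
        for (int j = 0; j < nHidden; j++) b1[j] -= learningRate * deltaHidden[j];
    }
}
```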

With these bumps in the road, and my patience for results running short, I have now chosen to make my first complete Synapse project one based on the MNIST dataset, a “hello world” dataset for AI developers. For those not familiar, MNIST is a set of handwritten digits stored as 28×28 pixel images. [MNIST Data]
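
As an aside, the raw MNIST files use a simple binary (IDX) layout with big-endian headers, so reading them into vectors does not need any library. A minimal sketch, assuming the standard files downloaded from the MNIST site:

```csharp
using System.IO;

// Sketch of reading the raw MNIST (IDX) files into float vectors Synapse can consume.
public static class MnistReader
{
    private static int ReadBigEndianInt(BinaryReader r)
    {
        byte[] b = r.ReadBytes(4);
        return (b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3];
    }

    // Returns one float[784] per image, each pixel scaled to 0..1.
    public static float[][] ReadImages(string path)
    {
        using (BinaryReader r = new BinaryReader(File.OpenRead(path)))
        {
            ReadBigEndianInt(r);            // magic number (2051), ignored here
            int count = ReadBigEndianInt(r);
            int rows = ReadBigEndianInt(r); // 28
            int cols = ReadBigEndianInt(r); // 28
            float[][] images = new float[count][];
            for (int i = 0; i < count; i++)
            {
                byte[] raw = r.ReadBytes(rows * cols);
                float[] pixels = new float[rows * cols];
                for (int p = 0; p < pixels.Length; p++) pixels[p] = raw[p] / 255f;
                images[i] = pixels;
            }
            return images;
        }
    }

    // Returns one label (0-9) per image.
    public static byte[] ReadLabels(string path)
    {
        using (BinaryReader r = new BinaryReader(File.OpenRead(path)))
        {
            ReadBigEndianInt(r);            // magic number (2049), ignored here
            int count = ReadBigEndianInt(r);
            return r.ReadBytes(count);
        }
    }
}
```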

This does change my platform’s first implementation from an unsupervised network to a supervised network, since MNIST is labeled. The system isn’t going to “learn” intuitively what the number is; it will have to be told after each attempt whether the number it believes it saw is the correct one. This is a drastic change, but one that does test whether my original definition of an AI is still valid, or whether it was only valid for unsupervised AIs. [Original AI Definition]

Is it going to be easy to switch a system from unsupervised to supervised learning? We will find out…

 

Sensor Types of Synapse (my AI platform)

I am building an AI platform with 3 main components: Sensors (inputs), Goals, and Actions (outputs).

The initial version of this system is being built to try to make an automated vehicle AI for the racing game I made about 5 years ago. The original AI in that game was built with waypoints and required a lot of setup and tweaking for the vehicles to make their way around the course, and it did not have any “learning”, so it is an AI only in the video game sense, not the machine learning sense.

I am planning on adding 4 different sensors to the vehicle.

  1. Location Sensor
    1. This reads the location on the course, like a GPS sensor
  2. Proximity Sensor
    1. This reads if something is within range of the vehicle
  3. Rank Sensor
    1. This reads the current race position (last place, 2nd place…)
  4. Speed Sensor
    1. This reads the speedometer

These are more than enough inputs for the system to optimize its actions and learn how to race along the course.

As I started coding these, I realized that I was going to encode the sensor readings to vector inputs in two distinct ways.

  1. Metered
  2. Discrete

Metered Sensor
The speed sensor is a good example of a metered sensor. In a metered sensor, the larger the reading, the more features of the input are activated; the smaller the reading, the fewer features are activated. If the maximum speed for a vehicle is 100 mph, and I have 20 feature dimensions on my sensor, then at 50 mph the sensor reading would have the first 10 features activated.

[Figure: Metered sensor encoding]
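
A sketch of this metered (thermometer-style) encoding, using the 100 mph / 20 feature example above; the class and parameter names are placeholders rather than actual Synapse sensor code:

```csharp
using System;

// Sketch of a metered ("thermometer") encoding: the higher the reading,
// the more features are switched on. Names are placeholders.
public static class MeteredSensor
{
    public static float[] Encode(float reading, float maxReading, int featureCount)
    {
        float[] features = new float[featureCount];
        // Fraction of the scale the reading covers, clamped to 0..1.
        float fraction = Math.Max(0f, Math.Min(1f, reading / maxReading));
        int active = (int)Math.Round(fraction * featureCount);
        for (int i = 0; i < active; i++) features[i] = 1f;
        return features;
    }
}

// Example: 50 mph on a 100 mph scale with 20 features -> first 10 features set to 1.
// float[] speedVector = MeteredSensor.Encode(50f, 100f, 20);
```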

Discrete Sensor
The location sensor is a good example of a discrete sensor. There should be no relationship between the location of the vehicle on the course and the number of features activated; it would make little sense to activate more features if the vehicle is at the top of the course versus the bottom of the course. In this sensor type, each feature represents a specific position on the course across the X, Y, and Z dimensions.

[Figure: Discrete sensor encoding]
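
A sketch of the discrete encoding, simplified to a 2D grid over the course (X and Z only) to keep it short; again, the names and grid sizes are placeholders:

```csharp
using System;

// Sketch of a discrete encoding: the course is split into cells and exactly one
// feature is activated for the cell the vehicle currently occupies.
public static class DiscreteSensor
{
    // Assumes the course origin is at (0, 0) and positions are non-negative.
    public static float[] Encode(float x, float z, float courseWidth, float courseLength,
                                 int cellsX, int cellsZ)
    {
        float[] features = new float[cellsX * cellsZ];
        int cellX = Math.Max(0, Math.Min(cellsX - 1, (int)(x / courseWidth * cellsX)));
        int cellZ = Math.Max(0, Math.Min(cellsZ - 1, (int)(z / courseLength * cellsZ)));
        features[cellZ * cellsX + cellX] = 1f; // one feature per location, no ordering
        return features;
    }
}
```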

Theoretically, all sensors could be metered, and the system could learn the discrete elements of the sensor reading. I am going to shortcut this, but it may be worth testing after the system is up and running and I have the second phase of the system implemented, which will create more complex internal representations and “learn”.

I can’t think of any other sensor encoding types one could build that would interpret raw sensor readings as vectors, so until the need is identified, these will be the base numeric sensor types.

UPDATE: I did think of another way of encoding a number. It could just be a single feature dimension. Pretty straightforward actually… I might have overcomplicated things. 🙂

Do you disagree? I hope you do because then one of us will learn something…

 

Definition and Elements of Synapse (my AI platform)

 

There is a really good definition of AI that has been put together by TechEmergence.com.

“Artificial intelligence is an entity (or collective set of cooperative entities), able to receive inputs from the environment, interpret and learn from such inputs, and exhibit related and flexible behaviors and actions that help the entity achieve a particular goal or objective over a period of time.”

https://www.techemergence.com/what-is-artificial-intelligence-an-informed-definition/

The reason I think it is so good is that it contains the three elements I have identified in the AI platform I am building, and in all other platforms, whether their creators are aware of it or not: inputs (sensors), actions (outputs), and goals (objectives). What a system made up of these things does is process them. This definition is the best one I have found, but if I were to criticize it, I think it is a bit too verbose. I would change it this way:

“Artificial intelligence is an entity, able to receive and process inputs from the environment in order to flexibly act upon the environment in a way that will achieve goals over time.”

I believe that inputs, actions, and goals are THINGS (objects) in an AI, and processing is the thing an AI does. I think this definition is a bit more direct than the techemergence.com definition, and if I were talking to someone writing an object-oriented solution, it would be a bit more specific for them.

It is important to differentiate that the action in an AI system is an object or THING, not a verb or an actual action. An AI system only processes, and the outputs of the AI system are the actions represented in an object. This action, if embodied, would then be interpreted by a motor system.
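
For someone writing an object-oriented solution, that separation might look roughly like this; these classes are illustrative only, not my actual design:

```csharp
// Illustrative only: inputs, goals, and actions as THINGS (objects);
// processing as the one thing the AI does.
public class SensorReading { public float[] Features; }   // input
public class Goal          { public float[] Target; }     // desired state
public class ActionCommand { public float[] Outputs; }    // output object, not a verb

public class AiSystem
{
    // The AI only processes; it never moves anything itself.
    public ActionCommand Process(SensorReading input, Goal goal)
    {
        // ...decide what to do given the input and the goal...
        return new ActionCommand { Outputs = new float[0] };
    }
}

public class MotorSystem
{
    // A separate motor system interprets the action object and acts on the environment.
    public void Execute(ActionCommand action) { /* turn wheels, press gas, print a digit */ }
}
```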

An oversimplification of an AI workflow would be something like the one below.

[Figure: Simplified AI workflow]

The Actions piece of this workflow would then be ingested by a motor system to execute in the environment, but the instruction for the action is the output of the AI system.

Do you disagree? I hope so, because then one of us is going to learn something…

Changing the format…

This WordPress site was put together initially to promote and demonstrate the progress of a project I was working on with my brother and a friend. That project is scrapped, but the name and the site are too good to just let sit.

I am going to start to post my work on AI – so instead of “Developments on Development” it’s going to be “Thoughts on Thoughts”.

There will be some well thought out articles, and then there will be random musings and thought experiments. I’ve been working on my AI project largely by myself at home and there is a big risk of getting tunnel vision, so I am going to take advantage of Cunningham’s law and start putting my wrong answers up here and hope someone corrects me.

Enjoy! Hopefully we both learn something in this process.

-Alex

IndieCade Feedback on Demo

We submitted a demo of the game to IndieCade for an upcoming IndieCade Festival and unfortunately didn’t make it. 😦

All is not lost though! We received some really great feedback and have since released a new demo taking the feedback into consideration. We thought we would share the feedback in its raw form and then what we did to try to respond to that feedback.

Feedback 1
The concept is solid as a multiplayer game, and a stands out a little (while there are plenty of chaotically fun local mutilplayer game, car-physics soccer is a least a little different, and cool). However the controls, specifically the car physics, while intended to be awkward, are effectively unusable w/intention – they accelerate so slowly, have such a high speed, and slow turns speed, but also lose control at the slightest collision, so scoring feels random because it often occurs by accident and not through player directed action. Making the game feel more like work, trying to swim upstream to control your car. It also feels like perhaps some deviation from basic soccer rules could be used in combination with the awkward physics to improve that, but other than the power ups that is an under explored area. Having a large number of local players through methods like using smartphones is a great idea, although the visuals also make it very very hard to distinguish your car (even with the show car marker functionality, which didn’t seem to work consistently).

Feedback 2
Super Truckin’ is a fun idea but I think it needs a little more work to get it into a more solid space. The controls feel a little too loose, with vehicles a little slow to respond, they feel sluggish and heavy with very wide turns. A reverse function would be very nice. The AI when we played two players with two AIs seems to be scoring against their own teams.  The music is fun but gets pretty repetitive. With sharper controls I think this game could be a lot of fun, it would feel like you had more direct control and thus could actually impact what was happening, as it is it feels like it’s more luck than skill if the ball goes into the goal.

Feedback 3
The core concept of the game is solid if slightly straightforward but there were problems in the execution. We were told that it could use mobile devices as an input but this didn’t add much as we chose to play on controllers for better response time and accuracy. My biggest problem in game was how difficult it was for players to exert any kind of meaningful agency on the game especially in matches with larger team sizes. The players car was very difficult to control and often overshot its target or was hit by an unavoidable collision. I think you could learn a lot from Fifa and the upcoming rocket league in terms of adding responsiveness and exciting meaningful options to the player’s controls.

The items we decided to tackle in this new release are below. We do plan on tackling more items in the next demo, but these were the ones we felt were common among the three reviews and were the quickest to handle.

Items Addressed

  1. Improved speed and maneuverability
    1. This was something we had grown accustomed to as we played the game, but as soon as we got the player feedback we went back, revisited the fundamentals, and knew we had to address it. The cars now accelerate much faster and turn much quicker. It may not be perfect yet, but it is much better. Check it out and let us know what other tweaks you might want.
  2. More Predictable Ball Handling
    1. This was already in progress by the time we received the IndieCade feedback, but it was good to know we were working on something that others noticed. Previously the cars had a top and bottom collider, but after playing Rocket League and realizing that the ball didn’t always collide squarely with the car, we decided to switch from two box colliders to a single capsule collider (a rough snippet of that change follows this list). This little change did the trick, and the ball now gets hit and goes in the direction you intended. A much more rewarding experience.
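
For anyone curious, the collider swap amounts to something like the following Unity snippet; the exact dimensions here are placeholders, not our tuned values:

```csharp
using UnityEngine;

// Rough illustration of the change (not our exact component setup): replace the two
// box colliders on a car with a single capsule collider when the car loads.
public class CarColliderSwap : MonoBehaviour
{
    void Awake()
    {
        foreach (BoxCollider box in GetComponents<BoxCollider>())
            Destroy(box);                 // drop the old top/bottom box colliders

        CapsuleCollider capsule = gameObject.AddComponent<CapsuleCollider>();
        capsule.direction = 2;            // align the capsule along the car's Z (length) axis
        capsule.radius = 0.6f;            // placeholder dimensions; tune per vehicle
        capsule.height = 3.0f;
    }
}
```
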
We are open to suggestions and feedback with this new release, but we plan on addressing a couple more items still. We will continue to focus on the fundamental car handling, making sure that is on target, and we will be focusing on getting more polished content.

Try the demo by downloading it here: http://in8b.it/

If you want to contact us directly, check out our Facebook, Twitter or other contact methods like old fashioned email!

Thanks!

In8b.it

A little bit of a set back…

So, after a relatively disappointing start to our Steam Greenlight, we were ready to start blogging, tweeting and demoing our game everywhere, when I was rear-ended by an Escalade at a stop light at the exit of a freeway.

That event put a stop to just about everything, as I was helicoptered from the accident and spent over a week in the ICU; a near-death experience for sure. The team here at In8B.it (my brother Konstantine and my friend Jason) was awesome during this time and showed a ton of support that has helped tremendously in my recovery, knowing full well that any of our plans for Super Truckin’ had to be put on hold.

It’s been two months to the day since the accident, and I feel like now is the time to try to get back to the plans for Super Truckin’, but I wanted to make sure our fans knew why we have been silent for so long and that we are still committed to delivering on Super Truckin’. It’s a fun game we like playing and we think others will too, even if our interest from Greenlight has been lukewarm. We think it may not be the right audience for a game that doesn’t have killing, zombies, or some other staple of modern gaming in it, and we are starting to consider what our next options are. If you are a fan and have a preferred console, hit us up and let us know; we will listen!

Just so you know I’m not BS’ing and that this is the real reason for our silence, here is a photo of my car after being totaled. Not sure if it is ironic that I’m working on a game where it is super fun to crash into others, but maybe the PTSD from driving can be lessened with my experience playing Super Truckin’. That would be a twist, haha.

[Photo: my totaled car after the accident]