Sensor Types of Synapse (my AI platform)

I am building an AI platform with 3 main components: Sensors (inputs), Goals, and Actions (outputs).

The initial version of this system is being built to try to make an automated vehicle AI for a racing game I made about 5 years ago. The original AI in that game was built with waypoints and required a lot of setup and tweaking for the vehicles to make their way around the course. It did not do any “learning”, so it was an AI only in the video-game sense, not the machine-learning sense.

I am planning on adding 4 different sensors to the vehicle.

  1. Location Sensor
    1. This reads the location on the course, like a GPS sensor
  2. Proximity Sensor
    1. This reads if something is within range of the vehicle
  3. Rank Sensor
    1. This reads the current race position (last place, 2nd place…)
  4. Speed Sensor
    1. This reads the speedometer

These are more than enough inputs for the system to optimize its actions and learn how to race along the course.

As I started coding these, I realized that I was going to encode the sensor readings to vector inputs in two distinct ways.

  1. Metered
  2. Discrete

Metered Sensor
The speed sensor is a good example of a metered sensor. In a metered sensor, the larger the reading, the more features of the input are activated; the slower the vehicle, the fewer features are activated. If the maximum speed for a vehicle is 100mph and I have 20 feature dimensions on my sensor, then at 50mph the first 10 features of the sensor reading would be activated.

[Figure: metered sensor encoding]
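
As a rough sketch of the metered encoding described above (the function name and the clamping of out-of-range readings are my assumptions, not code from the project):

```python
def metered_encode(reading, max_reading, num_features):
    """Activate the first k of num_features in proportion to the reading."""
    # Clamp out-of-range readings so the vector never over- or under-fills.
    clamped = max(0.0, min(float(reading), float(max_reading)))
    active = int(round(clamped / max_reading * num_features))
    return [1.0] * active + [0.0] * (num_features - active)

# The example from the post: 50mph, 100mph max, 20 feature dimensions
# -> the first 10 features are activated.
speed_vector = metered_encode(50, 100, 20)
```

A nice property of this style (sometimes called a thermometer encoding) is that nearby speeds produce overlapping feature vectors, so similar readings look similar to the system.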

Discrete Sensor
The location sensor is a good example of a discrete sensor. There should be no relationship between the vehicle’s location on the course and the number of features activated; it would make little sense to activate more features when the vehicle is at the top of the course than at the bottom. In this sensor type, each feature represents a specific region of the course across the X, Y, and Z dimensions.

[Figure: discrete sensor encoding]
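
A minimal sketch of the discrete encoding, assuming the course is split into a uniform grid of cells and exactly one feature fires per reading (the grid resolution, bounds, and function name are my own illustrative choices):

```python
def discrete_encode(position, course_min, course_max, cells_per_axis):
    """One-hot encode an (x, y, z) position over a 3-D grid of cells."""
    cells = []
    for p, lo, hi in zip(position, course_min, course_max):
        # Map the coordinate to a cell index, clamping so the far edge
        # of the course still lands inside the last cell.
        cell = int((p - lo) / (hi - lo) * cells_per_axis)
        cells.append(min(max(cell, 0), cells_per_axis - 1))
    x, y, z = cells
    flat_index = (x * cells_per_axis + y) * cells_per_axis + z
    vector = [0.0] * cells_per_axis ** 3
    vector[flat_index] = 1.0
    return vector
```

With 4 cells per axis this produces a 64-dimensional vector containing a single 1.0; any two positions inside the same cell activate the same feature, and the magnitude of the reading never changes how many features are on.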

Theoretically, all sensors could be metered, and the system could learn the discrete elements of the sensor reading. I am going to shortcut this, but it may be worth testing once the system is up and running and I have the second phase implemented, which will create more complex internal representations and “learn”.

I can’t think of any other encoding types one could build that would interpret raw sensor readings as vectors. So until the need is identified, these two will be the base encodings for numeric sensors.

UPDATE: I did think of another way of encoding a number. It could just be a single feature dimension. Pretty straightforward, actually… I might have overcomplicated things. 🙂
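
That single-feature version might look like this (the normalization into [0, 1] is my assumption; a raw value would work too):

```python
def scalar_encode(reading, max_reading):
    """Encode a number as a single feature, scaled into the range [0, 1]."""
    return [max(0.0, min(float(reading), float(max_reading))) / max_reading]

# 50mph with a 100mph maximum -> [0.5]
```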

Do you disagree? I hope you do because then one of us will learn something…
