Sensor Types of Synapse (my AI platform)

I am building an AI platform with three main components: sensors (inputs), goals (objectives), and actions (outputs).

The initial version of this system is being built to create an automated vehicle AI for a racing game I made about 5 years ago. The original AI in that game was built with waypoints and required a lot of setup and tweaking for the vehicles to make their way around the course, and it did not do any “learning”, so it was an AI only in the videogame sense, not the machine learning sense.

I am planning on adding 4 different sensors to the vehicle.

  1. Location Sensor: reads the vehicle’s location on the course, like a GPS sensor
  2. Proximity Sensor: reads whether something is within range of the vehicle
  3. Rank Sensor: reads the current race position (last place, 2nd place…)
  4. Speed Sensor: reads the speedometer

These are more than enough inputs for the system to optimize its actions and learn how to race along the course.

As I started coding these, I realized that I was going to encode the sensor readings as vector inputs in two distinct ways.

  1. Metered
  2. Discrete

Metered Sensor
The speed sensor is a good example of a metered sensor. In a metered sensor, the larger the reading, the more features of the input are activated; the slower the reading, the fewer features are activated. If the maximum speed for a vehicle is 100mph and I have 20 feature dimensions on my sensor, then at 50mph the sensor would have its first 10 features activated.

[Figure: MeteredSensor]
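As a minimal sketch of the idea (in Python with NumPy; the function name and parameters are mine for illustration, not part of Synapse), a metered encoding might look like this:

```python
import numpy as np

def metered_encode(value, max_value, n_features=20):
    """Thermometer-style encoding: the larger the reading,
    the more leading features are activated."""
    k = int(round(n_features * value / max_value))
    vec = np.zeros(n_features)
    vec[:min(k, n_features)] = 1.0
    return vec

# 50mph out of a 100mph maximum activates the first 10 of 20 features
print(metered_encode(50, 100))
```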

Discrete Sensor
The location sensor is a good example of a discrete sensor. There should be no relationship between the vehicle’s location on the course and the number of features activated; it would make little sense to activate more features when the vehicle is at the top of the course than at the bottom. In this sensor type, each feature represents a specific location in the X, Y, Z space of the course, so the active feature identifies where the vehicle is rather than how much of something there is.

[Figure: DiscreteSensor]
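A comparable sketch of a discrete encoding, assuming the course is divided into a grid of cells (again, the names and grid size are illustrative, not Synapse’s):

```python
import numpy as np

def discrete_encode(position, course_min, course_max, cells_per_axis=4):
    """One-hot grid encoding: the course is divided into cells and only
    the feature for the cell containing the vehicle is activated."""
    lo = np.asarray(course_min, dtype=float)
    hi = np.asarray(course_max, dtype=float)
    pos = np.asarray(position, dtype=float)
    idx = ((pos - lo) / (hi - lo) * cells_per_axis).astype(int)
    idx = np.clip(idx, 0, cells_per_axis - 1)  # keep edge values in range
    flat = np.ravel_multi_index(tuple(idx), (cells_per_axis,) * 3)
    vec = np.zeros(cells_per_axis ** 3)
    vec[flat] = 1.0
    return vec

# A vehicle at (25, 0, 75) on a 100x100x100 course activates exactly one feature
print(discrete_encode([25, 0, 75], [0, 0, 0], [100, 100, 100]).nonzero())
```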

Theoretically, all sensors could be metered, and the system could learn the discrete elements of the sensor reading. I am going to shortcut this, but it may be worth testing once the system is up and running and I have the second phase implemented, which will create more complex internal representations and “learn”.

I can’t think of any other encoding types one could build to interpret raw sensor readings as vectors, so until the need is identified, these will be the base encodings for numeric sensors.

UPDATE: I did think of another way of encoding a number. It could just be a single feature dimension. Pretty straightforward actually… I might have overcomplicated things. 🙂
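For completeness, a sketch of that single-feature encoding (just a normalized scalar; the name is illustrative):

```python
def scalar_encode(value, max_value):
    """Single-feature encoding: one normalized number in [0, 1]."""
    return [value / max_value]

print(scalar_encode(50, 100))  # [0.5]
```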

Do you disagree? I hope you do because then one of us will learn something…

 


Definition and Elements of Synapse (my AI platform)

 

There is a really good definition of AI that was put together by TechEmergence.com.

“Artificial intelligence is an entity (or collective set of cooperative entities), able to receive inputs from the environment, interpret and learn from such inputs, and exhibit related and flexible behaviors and actions that help the entity achieve a particular goal or objective over a period of time.”

https://www.techemergence.com/what-is-artificial-intelligence-an-informed-definition/

The reason I think it is so good is that it contains the three elements I have identified in the AI platform I am building, and in all other platforms, whether they are aware of it or not: inputs (sensors), actions (outputs), and goals (objectives). What a system made up of these things does is process them. This definition is the best one I have found, but if I were to criticize it, I think it is a bit too verbose. I would change it in this way:

“Artificial intelligence is an entity, able to receive and process inputs from the environment in order to flexibly act upon the environment in a way that will achieve goals over time.”

I believe that inputs, actions, and goals are THINGS (objects) in an AI, and processing is the thing an AI does. I think this definition is a little more direct than the TechEmergence.com definition, and it would be a bit more specific for someone writing an object-oriented solution.

It is important to differentiate that, in an AI system, an action is an object or THING, not a verb or an actual action. An AI system only processes, and its outputs are actions represented as objects. If the AI were embodied, this action would then be interpreted by a motor system.
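To make the object-oriented reading concrete, here is a hypothetical sketch in Python; none of these class names come from Synapse, they just illustrate inputs, actions, and goals as objects, with processing as the one verb:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    features: list[float]   # an encoded sensor vector (metered or discrete)

@dataclass
class Action:
    steering: float         # the action is data describing what to do,
    throttle: float         # not the doing itself

@dataclass
class Goal:
    name: str               # e.g. "finish the course"

class AI:
    def process(self, readings: list[SensorReading], goals: list[Goal]) -> Action:
        """The AI only processes; something outside it executes the Action."""
        raise NotImplementedError
```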

An oversimplification of an AI workflow would be something like the one below.

[Figure: SimplifiedWorkflow]

The Actions piece of this workflow would then be ingested by a motor system to execute in the environment, but the instruction for the action is the output of the AI system.
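As a usage sketch of that boundary (hypothetical names again, building on the classes above), a single tick of the workflow might look like this:

```python
def tick(ai: AI, sensors: list, goals: list[Goal], motor) -> None:
    readings = [s.read() for s in sensors]   # inputs from the environment
    action = ai.process(readings, goals)     # the AI only processes
    motor.execute(action)                    # execution happens outside the AI
```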

Do you disagree? I hope so, because then one of us is going to learn something…

Changing the format…

This WordPress site was put together initially to promote and demonstrate the progress of a project I was working on with my brother and a friend. That project has been scrapped, but the name and the site are too good to just let sit.

I am going to start posting my work on AI, so instead of “Developments on Development” it’s going to be “Thoughts on Thoughts”.

There will be some well-thought-out articles, and then there will be random musings and thought experiments. I’ve been working on my AI project largely by myself at home, and there is a big risk of getting tunnel vision, so I am going to take advantage of Cunningham’s Law and start putting my wrong answers up here in the hope that someone corrects me.

Enjoy! Hopefully we both learn something in this process.

-Alex