So I am looking to implement my own pattern of AI, which I am calling Synapse.
With that audacious goal, I figured I would take some time and spell out how I see these existing patterns and systems and the different implementations versus the planned implementation of Synapse.
Supervised learning is the most straightforward and common implementation of “AI”. I am not sure of its history or who built the first version of this form of machine learning (if you know, please let me know!), but it is very common.
This AI pattern requires “labeled data”, or the answer key to what is represented in the distributed processing network (or neural net, or whatever you want to call it). This method learns how to process the image/sound/data through training sessions, and after it has trained enough it can start processing new data. Newer supervised learning models have been trained to perform better than humans at tasks like categorization (hot dog / not hot dog) or prediction. Even humans are not perfect at these tasks, so we can now train these networks to be better than the average human (scary?).
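To make the “answer key” idea concrete, here is a minimal sketch of supervised learning: a single perceptron trained on labeled 2-D points. The features, labels, and learning rate are invented for illustration; real systems use images or sound and far larger networks.

```python
# Minimal supervised-learning sketch: a perceptron learns from labeled
# examples, where each example carries the correct answer (the "label").

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred          # the supervised signal: answer key vs. guess
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, point):
    w1, w2, b = weights
    x1, x2 = point
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# Toy stand-in for "hot dog / not hot dog": made-up labeled points.
data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.1), 0), ((0.8, 0.9), 1)]
model = train_perceptron(data)
```

After training, `predict(model, ...)` classifies new points it never saw, which is the whole point of the training sessions described above.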
Unsupervised learning is related to supervised learning in that the system is still trained on a network of neurons; the difference is that this type of system does not need “labeled data” to learn, it just learns the relationships between the data on its own. What this means is that it can group the data into “like” items, categorizing them.
An unsupervised system is generally used for categorization, or “clustering”. It can be used to answer the hot dog / not hot dog problem without data that is specifically labeled as hot dogs or not. The system learns the patterns in the images of food and then uses those patterns to group the images.
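A minimal sketch of that clustering idea, using k-means with k=2 on unlabeled 1-D numbers (real systems would cluster image features, not raw scalars, and the data here is invented):

```python
# Unsupervised-learning sketch: k-means groups unlabeled points purely
# by proximity. No answer key is ever provided.

def kmeans_1d(points, iters=10):
    # Start the two cluster centers at the extremes of the data.
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        # Assign each point to its nearest center, then move the centers.
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

# Two obvious groups, but nothing tells the system which is which.
data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]
low, high = kmeans_1d(data)
```

The system discovers the two groups itself, which is why this pattern is useful for finding structure that was not directly evident, as the next paragraph describes.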
I have not found an implementation of a business solution that is 100% unsupervised. It seems like this system is mostly used for exploring data and finding patterns that weren’t directly evident or easily described. After identifying these features, they can be used in conjunction with a supervised learning system.
I am relatively new to my understanding of Reinforcement learning and when I was reading about it I got really excited because it seemed to marry up a lot with the thoughts I had about an AI system. It had actions, and goals and separation of environment.
I was quickly let down.
The terms that are used were on point, and I am using a lot of the same terms in my system (action, reward/goal, environment), but the amount of structure that is required to be set up for this system is significant.
In a reinforcement learning system, you have to map out the environment and outline the rules of interaction within it. With those rules established, the system uses a bit of a brute force approach to solving what action an “agent” should take to maximize the rewards in the environment. There is a lot of setup in this system, with configurable constants required to determine how much future rewards should be valued over current rewards. (A bird in hand is worth two in the bush.)
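To show how much of that setup is hand-written, here is a sketch of tabular Q-learning on a tiny invented environment: four states in a line, with a reward only at the last one. The environment rules, the reward, and every constant (`alpha`, `gamma`, `eps`) are things the designer must supply up front; `gamma` is the configurable constant that trades future rewards against current ones (the bird in hand).

```python
# Reinforcement-learning sketch: the environment (step) and its reward
# rules are mapped out by hand before any learning happens.

import random

N_STATES, GOAL = 4, 3
ACTIONS = (-1, +1)                     # move left, move right

def step(state, action):
    """Hand-written environment rules: walk on a line, reward at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    # alpha: learning rate, gamma: future-reward discount, eps: exploration.
    random.seed(0)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            if random.random() < eps:
                a = random.choice(ACTIONS)          # explore
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])  # exploit
            nxt, r = step(s, a)
            best_next = max(q[(nxt, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

q = q_learn()
# The greedy policy the agent ends up with, per non-goal state.
policy = [max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(N_STATES - 1)]
```

Even for this toy problem, the agent can only learn because the environment, actions, and reward were all mapped out first; that is the setup burden the paragraph above is describing.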
My big problem with these…
The biggest problem I have with these systems is the amount of setup and configuration they all require. When I think of an AGI system, I imagine starting with the smallest elements of input; with a properly structured system, the items that are configurable in these systems would be emergent features of the structure of the system.
An example of this is the brain's working memory. We know that working memory holds about seven items, plus or minus two (5-9). There is no evidence of something specific in the brain with seven slots open for the seven things you can keep in mind at a time; it is most likely a side effect of the way the brain processes data, and not a specific “thing” in the brain.
Why will Synapse be different?
With Synapse, I am taking the approach of defining the things that I believe exist in the brain, and letting the features that other systems configure explicitly become emergent properties of the system.
Synapse has neurons and connections between them, architected in a way to fulfill the goal. The trick is structuring the same set of objects that have been in use in AI systems for years, since those are the things you can look at a brain and directly see. There is no ambiguous concept in the structure of the system. As things start coming together, I am hoping to see those amorphous concepts represented in the processing of the system, not as a specific configuration of the system.
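As a purely hypothetical illustration of the kind of primitives described here, a system built only from neurons and connections might look like this. Every class and attribute name below is my own invention, not from Synapse; the point is only that nothing higher-level than neurons and weighted connections is defined up front.

```python
# Hypothetical sketch: only two primitive objects, with no built-in
# concepts beyond them. Anything else would have to emerge from wiring.

class Neuron:
    def __init__(self, name):
        self.name = name
        self.inputs = []               # incoming Connection objects
        self.activation = 0.0

    def fire(self):
        # Activation is just the weighted sum of connected neurons.
        self.activation = sum(c.source.activation * c.weight
                              for c in self.inputs)
        return self.activation

class Connection:
    def __init__(self, source, target, weight=1.0):
        self.source, self.target, self.weight = source, target, weight
        target.inputs.append(self)

# Wire three neurons together and propagate a signal.
a, b, c = Neuron("a"), Neuron("b"), Neuron("c")
Connection(a, c, 0.5)
Connection(b, c, 2.0)
a.activation, b.activation = 1.0, 1.0
```

Calling `c.fire()` combines the upstream activations through the connection weights; in this framing, any working-memory-like limit would have to show up as a side effect of the wiring, never as a configured slot count.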
I have a lot of ambition for this project and am using these posts to think through the things about existing systems that I find useful, as well as things that I see as shortcuts in the process of arriving at an intelligent digital “agent”. If you think the same way as me, or not, post a comment and we can have a conversation.