Friday, September 23, 2011

The New Neural Network.

After long deliberation, I have decided to revamp the AI's neural network implementation. There are a few main reasons:

  • Our "neural network" was a neural network in name only: a rigid model that externally behaved like one but had virtually nothing else in common.
  • The NeuralNetwork, Policies and Memories classes were getting far more complex than required, and maintaining them would have become overwhelming.
  • The "neural network" was limited in the sense that it wasn't terribly exciting to watch.

So... I thought long and hard about what I could do. I had to really think about how all the AI components fit together and how each had its own separate task.

Conclusion: I will be rewriting it from scratch. It sounds like a big task, but I'm actually reducing the size of the problem by making it simpler and more effective. I will not be scrapping DecisionSystem -- which, if you recall, currently does most of the legwork (the AI-specific stuff was only very early stage.) But instead of the neural network competing with the DecisionSystem for the authority to make decisions for the AI (which is, in this context, by and large a qualitative problem, not a quantitative one), the neural network will be applied to the AI's fighting style and its preference towards certain moves (out of the pool of moves it knows.) I am also going to implement this artificial neural network in a closer-to-standard way, with neurons and weighted dendrites, though it will still be slightly unconventional in its usage.
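
As a rough illustration of the move-preference idea, the network's output weights could drive a weighted pick from the pool of known moves. This is only a sketch of my own: the move names, the `choose_move` function and the preference table are all hypothetical, not taken from the project.

```python
import random

# Hypothetical move pool -- none of these names come from the project.
MOVE_POOL = ["jab", "uppercut", "sweep", "block"]

def choose_move(preferences):
    """Pick a move with probability proportional to its preference weight.

    A heavier weight means the AI favours that move more often, while
    every known move still has some chance of being picked.
    """
    total = sum(preferences[m] for m in MOVE_POOL)
    pick = random.uniform(0.0, total)
    running = 0.0
    for move in MOVE_POOL:
        running += preferences[move]
        if pick <= running:
            return move
    return MOVE_POOL[-1]  # guard against floating-point edge cases

# Example preference table the network might have settled on.
preferences = {"jab": 2.0, "uppercut": 1.0, "sweep": 0.5, "block": 0.5}
print(choose_move(preferences))
```

The point of the weighted pick is that a fighting style emerges from the weights themselves, rather than from hard-coded rules about which move to throw.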

I'm not sure if neural networks have been used in this way before. Maybe it's a hybrid... But this way, the problem is made simpler for the neural network, as it doesn't need to analyse its output with respect to its effect on the world. Only forward inputs (from the world) are required, and weights are adjusted on the fly based on those inputs. The neurons themselves only need to work in one mode, and there is no need to calculate error derivatives as required by the back-propagation algorithm.
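
Here is a minimal sketch of how I read that description -- a forward-only neuron whose dendrite weights drift towards frequently active inputs, Hebbian-style, with no error signal ever computed. The class and its parameters are my own invention for illustration, not the project's actual code.

```python
import math

class Neuron:
    """Forward-only neuron with weighted dendrites -- hypothetical sketch.

    Weights are nudged on the fly from the inputs alone (a Hebbian-style
    adjustment): dendrites whose inputs fire alongside the neuron get
    stronger. No back-propagation, no error derivatives.
    """

    def __init__(self, n_inputs, learning_rate=0.05):
        self.weights = [1.0 / n_inputs] * n_inputs
        self.learning_rate = learning_rate

    def forward(self, inputs):
        # Weighted sum of dendrite inputs, squashed into (0, 1).
        total = sum(w * x for w, x in zip(self.weights, inputs))
        activation = 1.0 / (1.0 + math.exp(-total))
        # On-the-fly adjustment: strengthen each dendrite in proportion
        # to how active both its input and the neuron were.
        self.weights = [w + self.learning_rate * activation * x
                        for w, x in zip(self.weights, inputs)]
        return activation

neuron = Neuron(3)
output = neuron.forward([1.0, 0.0, 0.5])
```

Because the update only ever uses the forward inputs, the neuron works in a single mode, which is exactly what makes the scheme so much lighter than a trained back-prop network.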

In a sense I'm downgrading the goals of the neural network to something that makes more sense, and it should work seamlessly. I've already begun the coding and it's going smoothly.

Learning will still be apparent and should actually be easier to demonstrate. This is done through the KnowledgeBank class as well as the NeuralNetwork (they are tightly coupled.)
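
The coupling between those two classes might look something like the sketch below. To be clear, the post doesn't show the real interface, so every method here is a hypothetical stand-in: the bank stores the moves the AI knows, and the network keeps a preference weight per known move that experience nudges over time.

```python
class KnowledgeBank:
    """Hypothetical sketch of the move store the post alludes to."""

    def __init__(self):
        self.moves = []

    def learn(self, move):
        # Record a newly learned move (once).
        if move not in self.moves:
            self.moves.append(move)

class NeuralNetwork:
    """Hypothetical preference layer, tightly coupled to its bank."""

    def __init__(self, bank):
        self.bank = bank
        self.weights = {}

    def reinforce(self, move, amount=0.1):
        # Only moves the bank actually knows can gain preference --
        # this is where the tight coupling shows up.
        assert move in self.bank.moves
        self.weights[move] = self.weights.get(move, 1.0) + amount

    def preference(self, move):
        return self.weights.get(move, 1.0)

bank = KnowledgeBank()
net = NeuralNetwork(bank)
bank.learn("jab")
net.reinforce("jab")
```

With this split, learning is easy to demonstrate: watch the preference weights move after each fight.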

Our main goal for the AI is to provide a computer-controlled player that improves over time in direct relation to its experiences, and I believe this system will achieve that with greater success.
