Non-Convolutional Image Recognition

I have had some difficulty finding numbers for MNIST training times, so I am going to post some of mine and also discuss what my network is doing. So far in my work on MNIST, I have generated a convergent network. Using the CPU only (a Ryzen 7 1700), I train a convergent network in under…
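For context, here is a minimal sketch of how such a wall-clock number can be measured in C++. The `train_epoch()` function is a hypothetical stand-in for one backpropagation pass over the MNIST training set, not the network described above:

```cpp
#include <chrono>
#include <iostream>

// Hypothetical stand-in for one backpropagation pass over the training set.
void train_epoch() { /* forward + backward over all training examples */ }

int main() {
    auto start = std::chrono::steady_clock::now();
    for (int epoch = 0; epoch < 10; ++epoch)
        train_epoch();                              // run training to convergence
    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    std::cout << "training time: " << elapsed.count() << " s\n";
}
```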

Intelligent sequences of numbers

Since a large part of what defines a neural network is its initial (or current) random state, maybe we can consider this entire state as just a vector of randomly generated values. In reality, that is what it is on the computer: each of these values is generated by a random number generator in a specific sequence.
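A small sketch of that idea: the network's entire initial state reproduced from a single seed. The `std::mt19937` generator and the uniform init range here are assumptions standing in for whatever the real network uses:

```cpp
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// The entire initial state of a network as a flat vector of values drawn
// from a seeded RNG in a fixed sequence.
std::vector<float> generate_state(std::uint32_t seed, std::size_t count) {
    std::mt19937 rng(seed);   // the sequence is fully determined by the seed
    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
    std::vector<float> state(count);
    for (float& w : state) w = dist(rng);
    return state;
}

int main() {
    // The same seed always reproduces the same sequence, so the seed alone
    // is a complete description of the network's initial state.
    auto a = generate_state(42, 5);
    auto b = generate_state(42, 5);
    std::cout << std::boolalpha << (a == b) << '\n';  // prints: true
}
```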

Dumb Neural Networks

The Lottery Ticket Hypothesis suggests that some networks train poorly, if at all, depending on random properties of the network. In my current project, I have 100 agents, each with its own neural network of the same shape. Each of these networks trains in exactly the same way, but some end up acting smart and others dumb.
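The effect is easy to reproduce in miniature. The toy below is not the project's networks: each "agent" is a single weight descending the same hand-picked nonconvex loss with the identical update rule, and only the random start differs, yet the agents split between a good minimum and a poor one:

```cpp
#include <cmath>
#include <iostream>
#include <random>

// Toy nonconvex loss with a poor local minimum near w = 2 and a better
// one near w = -2. Every agent runs the identical update rule.
double loss(double w)  { return std::pow(w * w - 4.0, 2.0) + w; }
double dloss(double w) { return 4.0 * w * (w * w - 4.0) + 1.0; }

int main() {
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> init(-3.0, 3.0);

    for (int agent = 0; agent < 10; ++agent) {
        double w = init(rng);                  // only the random start differs
        for (int step = 0; step < 1000; ++step)
            w -= 0.001 * dloss(w);             // identical training for every agent
        std::cout << "agent " << agent << " final loss " << loss(w) << '\n';
    }
}
```

Agents that start in the wrong basin stay dumb no matter how long the identical training runs.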

Lottery Ticket Hypothesis in action

I have written an application which creates pools of 1,000 neural networks. One test performs backpropagation training on them; a second performs backpropagation plus a genetic algorithm. The number of times training is called is the same in both tests. The genetic algorithm actually seems to be able to converge on a lottery ticket…
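A sketch of the protocol as described, with the assumptions flagged: `train()` is a stand-in for one backpropagation call and the fitness function is a toy, but the control variable is the real one, since both pools receive exactly the same number of `train()` calls and only the hybrid pool runs the genetic step:

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

struct Net {
    std::vector<double> w;                     // stand-in for the network's weights
    double fitness() const {                   // toy fitness: closeness to zero
        double s = 0.0;
        for (double x : w) s += x * x;
        return -s;
    }
};

std::mt19937 rng(1);

Net random_net(std::size_t n) {
    std::uniform_real_distribution<double> d(-1.0, 1.0);
    Net net;
    net.w.resize(n);
    for (double& x : net.w) x = d(rng);
    return net;
}

void train(Net& net) {                         // stand-in for one backprop call
    for (double& x : net.w) x *= 0.99;         // nudge weights toward the optimum
}

void genetic_step(std::vector<Net>& pool) {
    auto best = std::max_element(pool.begin(), pool.end(),
        [](const Net& a, const Net& b) { return a.fitness() < b.fitness(); });
    std::normal_distribution<double> mut(0.0, 0.05);
    for (Net& net : pool)
        if (&net != &*best)                    // keep the "lottery ticket",
            for (std::size_t i = 0; i < net.w.size(); ++i)
                net.w[i] = best->w[i] + mut(rng);  // mutate copies of it
}

int main() {
    const int pool_size = 1000, epochs = 50;
    std::vector<Net> plain, hybrid;
    for (int i = 0; i < pool_size; ++i) {
        plain.push_back(random_net(8));
        hybrid.push_back(random_net(8));
    }

    for (int e = 0; e < epochs; ++e) {
        for (Net& n : plain)  train(n);        // same number of train() calls...
        for (Net& n : hybrid) train(n);        // ...in both tests
        genetic_step(hybrid);                  // only the hybrid pool evolves
    }

    auto best_of = [](const std::vector<Net>& p) {
        double b = -1e18;
        for (const Net& n : p) b = std::max(b, n.fitness());
        return b;
    };
    std::cout << "plain best:  " << best_of(plain)  << '\n'
              << "hybrid best: " << best_of(hybrid) << '\n';
}
```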

My AI study so far

I have been studying neural networks for some time, and recently, during a YSU hackathon, I made interesting progress. After about a year-long break, I returned to this code, made a large amount of progress, and a number of topics presented themselves in my C++ software. I'm going to describe some of my…