The smallest XOR network and lottery ticket

I am creating networks with two input nodes, x*y hidden nodes, and one output node. Whether one of these networks can be produced comes down to odds set by the initial state of the starting network pool. I have a pool of 2000 randomly generated networks, and I try to… Continue reading “The smallest XOR network and lottery ticket”
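Here is a minimal C++ sketch of the pool idea, not the project's actual code: build a pool of 2000 random 2-H-1 networks and count how many already compute XOR by pure luck of initialization. The pool size matches the post; the hidden width, tanh/sigmoid activations, weight range, and 0.5 decision threshold are my assumptions.

    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    // One tiny 2-H-1 network: tanh hidden layer, sigmoid output.
    struct Net {
        int h;
        std::vector<double> w1, b1, w2; // w1 is h*2, b1 and w2 are h
        double b2;
        double forward(double x, double y) const {
            double out = b2;
            for (int i = 0; i < h; ++i) {
                double a = std::tanh(w1[i * 2] * x + w1[i * 2 + 1] * y + b1[i]);
                out += w2[i] * a;
            }
            return 1.0 / (1.0 + std::exp(-out)); // sigmoid
        }
    };

    int main() {
        const int POOL = 2000; // pool size from the post
        const int H = 2;       // smallest hidden layer that can express XOR
        std::mt19937 rng(12345);
        std::uniform_real_distribution<double> U(-2.0, 2.0);

        int winners = 0;
        for (int n = 0; n < POOL; ++n) {
            Net net;
            net.h = H;
            net.w1.resize(H * 2); net.b1.resize(H); net.w2.resize(H);
            for (auto& w : net.w1) w = U(rng);
            for (auto& w : net.b1) w = U(rng);
            for (auto& w : net.w2) w = U(rng);
            net.b2 = U(rng);

            // XOR truth table: output > 0.5 must match the label on all four rows.
            const int X[4][3] = {{0,0,0},{0,1,1},{1,0,1},{1,1,0}};
            bool ok = true;
            for (auto& row : X)
                if ((net.forward(row[0], row[1]) > 0.5) != (row[2] == 1)) { ok = false; break; }
            if (ok) ++winners;
        }
        std::printf("%d of %d random networks already solve XOR\n", winners, POOL);
    }

The count it prints is exactly the "odds" in question: how often a usable network falls out of the initial random state before any training happens.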

Non-Convolutional Image Recognition

I have had some difficulty finding numbers for MNIST training times, so I am going to post some of mine and also discuss what my network is doing. So far in my work on MNIST, I have generated a convergent network. Using the CPU only (a Ryzen 7 1700), I train a convergent network in under… Continue reading “Non-Convolutional Image Recognition”
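"Non-convolutional" here means a plain dense multilayer perceptron. The sketch below is my assumption of the general shape, not the project's code: a 784-32-10 forward pass with softmax, run on a random stand-in image since MNIST loading is outside the excerpt. The hidden width of 32 is arbitrary.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Dense layer: y = act(W x + b), weights stored row-major.
    static std::vector<double> dense(const std::vector<double>& x,
                                     const std::vector<double>& W,
                                     const std::vector<double>& b,
                                     bool relu) {
        size_t out = b.size(), in = x.size();
        std::vector<double> y(out);
        for (size_t o = 0; o < out; ++o) {
            double s = b[o];
            for (size_t i = 0; i < in; ++i) s += W[o * in + i] * x[i];
            y[o] = relu ? std::max(0.0, s) : s;
        }
        return y;
    }

    int main() {
        const size_t IN = 784, HID = 32, OUT = 10; // 28x28 image in, 10 digits out
        std::mt19937 rng(7);
        std::normal_distribution<double> N(0.0, 0.05);
        auto randv = [&](size_t n) {
            std::vector<double> v(n);
            for (auto& e : v) e = N(rng);
            return v;
        };
        auto W1 = randv(HID * IN), b1 = randv(HID);
        auto W2 = randv(OUT * HID), b2 = randv(OUT);

        std::vector<double> image = randv(IN); // stand-in for one MNIST image
        auto h = dense(image, W1, b1, true);   // hidden layer, ReLU
        auto logits = dense(h, W2, b2, false); // output layer

        // Softmax over the 10 digit classes.
        double m = logits[0], z = 0.0;
        for (double v : logits) m = std::max(m, v);
        std::vector<double> p(OUT);
        for (size_t i = 0; i < OUT; ++i) z += (p[i] = std::exp(logits[i] - m));
        for (size_t i = 0; i < OUT; ++i) p[i] /= z;
        for (size_t i = 0; i < OUT; ++i) std::printf("digit %zu: %.3f\n", i, p[i]);
    }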

Visualizing network training

Here is a video of where my project is at. Basically, it provides a graphical visualization of the network weights. I am going to experiment with generating and examining this sort of image throughout training. In this video, the network is training on six problems, and it slowly learns for about 45 seconds. After that,… Continue reading “Visualizing network training”
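The project's actual renderer is not shown in the excerpt, so here is a minimal sketch assuming the simplest possible approach: normalize one weight matrix into the 0–255 range and write it as an ASCII PGM image that any viewer can open. The matrix size and the random stand-in weights are my assumptions.

    #include <algorithm>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        const int ROWS = 32, COLS = 64; // one weight matrix, sizes assumed
        std::mt19937 rng(42);
        std::normal_distribution<double> N(0.0, 1.0);
        std::vector<double> w(ROWS * COLS);
        for (auto& v : w) v = N(rng); // stand-in for trained weights

        // Scale so the most negative weight is black and the most positive white.
        double lo = *std::min_element(w.begin(), w.end());
        double hi = *std::max_element(w.begin(), w.end());

        std::FILE* f = std::fopen("weights.pgm", "wb");
        if (!f) return 1;
        std::fprintf(f, "P2\n%d %d\n255\n", COLS, ROWS); // ASCII PGM header
        for (int r = 0; r < ROWS; ++r) {
            for (int c = 0; c < COLS; ++c)
                std::fprintf(f, "%d ", (int)(255.0 * (w[r * COLS + c] - lo) / (hi - lo)));
            std::fprintf(f, "\n");
        }
        std::fclose(f);
        std::puts("wrote weights.pgm; regenerate each epoch to watch training");
    }

Dumping one frame per training epoch and flipping through them gives exactly the kind of animation the video shows.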

Intelligent sequences of numbers

Since a large part of a neural network's identity is its initial (or current) random state, maybe we can consider this entire state as just a vector of randomly generated values. On the computer, that is literally what it is: each of these values is produced by a random number generator in a specific sequence. Continue reading “Intelligent sequences of numbers”
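To make that concrete, here is a small sketch (the Mersenne Twister and the uniform init range are my assumptions): the seed, the generator, and the draw order fully determine the vector, so the entire initial state of a network can be named by a single integer.

    #include <cstdio>
    #include <random>
    #include <vector>

    // Generate the full initial weight state of a network from one seed.
    std::vector<double> initial_state(unsigned seed, size_t n_weights) {
        std::mt19937 rng(seed);
        std::uniform_real_distribution<double> U(-1.0, 1.0);
        std::vector<double> w(n_weights);
        for (auto& v : w) v = U(rng); // value i depends only on the seed and position i
        return w;
    }

    int main() {
        auto a = initial_state(1234, 8);
        auto b = initial_state(1234, 8); // same seed -> identical sequence
        std::printf("reproducible: %s\n", a == b ? "yes" : "no");
        for (double v : a) std::printf("%+.4f ", v);
        std::printf("\n");
    }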

Configurations of random variables

In my neural network program, I refactored some code and introduced an error which I did not notice for some time. When I would run the program, eventually, out of 100 networks, only one or a few would learn the pathing problem. With no difference in how they are trained, they are all taught exactly… Continue reading “Configurations of random variables”

Dumb Neural Networks

The Lottery Ticket Hypothesis says that some networks train poorly, if at all, based on random properties of the network. In my current project, I have 100 agents, each with its own neural network of the same shape. Each of these networks trains in exactly the same way, but some end up acting smart and others dumb. Continue reading “Dumb Neural Networks”
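The pathing task itself is not in this excerpt, so the sketch below substitutes XOR as a small task where the same effect shows up: 100 networks of identical shape, an identical backprop schedule for every one, and only the seed differing. The layer sizes, learning rate, and epoch count are my assumptions.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // 2-2-1 XOR network trained with plain backprop; returns true if it
    // classifies all four patterns correctly after training.
    static bool train_one(unsigned seed) {
        std::mt19937 rng(seed);
        std::uniform_real_distribution<double> U(-1.0, 1.0);
        double w1[2][2], b1[2], w2[2], b2 = U(rng);
        for (auto& row : w1) for (double& v : row) v = U(rng);
        for (double& v : b1) v = U(rng);
        for (double& v : w2) v = U(rng);

        const double X[4][2] = {{0,0},{0,1},{1,0},{1,1}};
        const double T[4] = {0,1,1,0};
        const double lr = 0.5;

        for (int epoch = 0; epoch < 2000; ++epoch) {
            for (int p = 0; p < 4; ++p) {
                // Forward pass: tanh hidden units, sigmoid output.
                double h[2], y = b2;
                for (int i = 0; i < 2; ++i) {
                    h[i] = std::tanh(w1[i][0]*X[p][0] + w1[i][1]*X[p][1] + b1[i]);
                    y += w2[i] * h[i];
                }
                y = 1.0 / (1.0 + std::exp(-y));
                // Backward pass: MSE loss, chain rule through both layers.
                double dy = (y - T[p]) * y * (1.0 - y);
                for (int i = 0; i < 2; ++i) {
                    double dh = dy * w2[i] * (1.0 - h[i]*h[i]); // uses pre-update w2
                    w2[i]    -= lr * dy * h[i];
                    w1[i][0] -= lr * dh * X[p][0];
                    w1[i][1] -= lr * dh * X[p][1];
                    b1[i]    -= lr * dh;
                }
                b2 -= lr * dy;
            }
        }
        // Final test: every pattern must land on the right side of 0.5.
        for (int p = 0; p < 4; ++p) {
            double h[2], y = b2;
            for (int i = 0; i < 2; ++i) {
                h[i] = std::tanh(w1[i][0]*X[p][0] + w1[i][1]*X[p][1] + b1[i]);
                y += w2[i] * h[i];
            }
            y = 1.0 / (1.0 + std::exp(-y));
            if ((y > 0.5) != (T[p] > 0.5)) return false;
        }
        return true;
    }

    int main() {
        int smart = 0;
        for (unsigned seed = 0; seed < 100; ++seed)
            if (train_one(seed)) ++smart;
        std::printf("%d of 100 identically trained networks learned XOR\n", smart);
    }

Running this shows the split directly: the training code is byte-for-byte the same for every network, yet some seeds converge and others sit in a local minimum, dumb.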

Neural Networks and Constants

In my research, it is possible to train a network so that it learns a constant such as PI and puts it to use in its function. However, if we pass PI in as an input, rather than having to ‘teach’ PI, then training is worlds faster. The network merely learns to use PI… Continue reading “Neural Networks and Constants”
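A tiny sketch of the two input encodings, with a toy target function (circle area) that is my assumption, not the post's task. The point is only that PI moves from something the weights must encode internally to something the network is handed and merely routes.

    #include <cstdio>
    #include <vector>

    const double PI = 3.141592653589793;

    // Target the network must fit: area of a circle, PI * r * r.
    // Variant A: the net sees only r and must bake PI into its weights.
    // Variant B: PI arrives as an extra input feature, ready to use.
    std::vector<double> inputs_learned_pi(double r) { return {r}; }
    std::vector<double> inputs_given_pi(double r)   { return {r, PI}; }

    int main() {
        double r = 2.0;
        auto a = inputs_learned_pi(r);
        auto b = inputs_given_pi(r);
        std::printf("variant A input: [%g]          target: %g\n", a[0], PI * r * r);
        std::printf("variant B input: [%g, %g]  target: %g\n", b[0], b[1], PI * r * r);
    }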

Lottery Ticket Hypothesis in action

I have written an application which creates pools of 1000 neural networks. One test performs backpropagation training on them. A second test performs backpropagation plus a genetic algorithm. The number of times training is called is the same for both tests. The genetic algorithm seems to actually be able to converge on a lottery ticket… Continue reading “Lottery Ticket Hypothesis in action”
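Here is a hedged sketch of the genetic-algorithm half of that comparison. The real application evolves actual networks; here each network is reduced to a bare weight vector and the fitness function is a toy stand-in, so only the select-clone-mutate loop over the pool of 1000 is the point. Genome length, mutation scale, and generation count are my assumptions.

    #include <algorithm>
    #include <cstdio>
    #include <random>
    #include <vector>

    using Genome = std::vector<double>; // stand-in for one network's weight vector

    // Toy fitness: lower is better. A real run would measure training loss
    // after the same number of backprop calls per network.
    static double loss(const Genome& g) {
        double s = 0.0;
        for (double v : g) s += (v - 0.5) * (v - 0.5); // toy target: all weights 0.5
        return s;
    }

    int main() {
        const int POOL = 1000, GENES = 16, GENERATIONS = 50;
        std::mt19937 rng(99);
        std::uniform_real_distribution<double> U(-1.0, 1.0);
        std::normal_distribution<double> mut(0.0, 0.05);

        std::vector<Genome> pool(POOL, Genome(GENES));
        for (auto& g : pool) for (double& v : g) v = U(rng);

        for (int gen = 0; gen < GENERATIONS; ++gen) {
            // Sort so the lowest-loss ("lottery ticket") genomes lead the pool.
            std::sort(pool.begin(), pool.end(),
                      [](const Genome& a, const Genome& b) { return loss(a) < loss(b); });
            // Replace the worst half with mutated clones of the best half.
            for (int i = POOL / 2; i < POOL; ++i) {
                pool[i] = pool[i - POOL / 2];
                for (double& v : pool[i]) v += mut(rng);
            }
        }
        std::printf("best loss after %d generations: %.6f\n", GENERATIONS, loss(pool[0]));
    }

The design intuition matches the post: backprop alone improves each ticket in place, while the GA also lets the pool concentrate its training budget on the seeds that were lucky to begin with.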

My AI study so far

I have been studying neural networks for some time, and recently, during a YSU hackathon, I managed to make interesting progress. After about a year-long break, I returned to this code and made a large amount of progress, and a number of topics have presented themselves in my C++ software. I’m going to describe some of my… Continue reading “My AI study so far”