A large part of a neural network's identity is its initial (or current) random state, so we can consider that entire state as just a vector of randomly generated values. In fact, that is exactly what it is on the computer: each value is produced by a random number generator in a specific sequence. The question I want to examine is whether the nature of the random number generator affects a network's capacity to learn. I'm going to write a series of random number generators and see if they change learning capacity in some way.

One generator I would like to make or find, an idea I have only just thought up, is a pi digit generator, where each successive digit of pi is treated as a random number. The hope is that instead of always relying on backpropagation and trial and error, there may be a way to generate an intelligent initial network state through a specific choice of random initialization. I think each digit of pi can be viewed as random, since the expansion is non-repeating and its digits appear statistically uniform once you ignore where you started counting. At the same time, the starting digit position acts as the random seed, which can be used deterministically.

Do random number generators affect network intelligence? Where does this go? We will see, maybe.
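As a minimal sketch of what such a pi digit generator might look like: the digit stream below uses Gibbons' unbounded spigot algorithm (a known method for emitting decimal digits of pi one at a time), and the `pi_uniform` helper with its `seed` and `digits_per_sample` parameters is my own illustrative naming for the "starting digit as seed" idea, not an established API.

```python
from itertools import islice

def pi_digits():
    """Yield the decimal digits of pi one at a time (Gibbons' spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # The next digit is settled; emit it and shift the state.
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # Not enough precision yet; consume another term of the series.
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

def pi_uniform(seed=0, digits_per_sample=4):
    """Turn runs of pi digits into floats in [0, 1).

    `seed` is the digit offset to start reading from -- the starting
    digit acting as a deterministic random seed.
    """
    gen = pi_digits()
    for _ in range(seed):
        next(gen)
    while True:
        block = islice(gen, digits_per_sample)
        yield sum(d * 10 ** -(i + 1) for i, d in enumerate(block))

# Hypothetical use: draw an initial weight vector for a tiny layer,
# rescaled from [0, 1) to [-1, 1).
rng = pi_uniform(seed=1, digits_per_sample=4)  # skip the leading "3"
weights = [2 * next(rng) - 1 for _ in range(6)]
```

With `seed=1`, the first sample is built from the digits 1, 4, 1, 5, giving 0.1415; a different seed yields an entirely different but fully reproducible stream, which is the property that would let two networks share an initialization by sharing only an integer.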