Neural Networks and Constants

In my research, it is possible to train a network so that it learns a constant such as PI and puts it to use in its function. However, if we pass PI in as an input, rather than having to ‘teach’ it, training is worlds faster. The network merely learns to use PI rather than derive it and use it simultaneously (though the latter is possible). It seems that if there are any constants which could be useful to the function, they should be passed in. Maybe this can be extended to full equations: rather than the network learning to derive a given equation or function, those definitions can be passed into the network along with the constants and other inputs. In this way, networks can be ‘educated’. A minimal sketch of this setup is below.
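The following is a minimal sketch (not from the original post) of the setup described above, using PyTorch on a toy task: predicting the area of a circle from its radius. One network sees only the radius and must absorb PI into its weights; the other also receives PI as a constant input and r² as an equation-derived feature, mirroring the post's suggestion of passing in constants and known definitions. The architecture, learning rate, and epoch count are arbitrary illustrative choices, and how much the extra inputs actually help will depend on the task.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: radii in [0, 2), target is the circle area pi * r^2.
r = torch.rand(1024, 1) * 2.0
area = math.pi * r ** 2

def make_net(in_features):
    # Small MLP; same architecture for both setups so only the inputs differ.
    return nn.Sequential(
        nn.Linear(in_features, 32),
        nn.ReLU(),
        nn.Linear(32, 1),
    )

def train(net, inputs, targets, epochs=500):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(inputs), targets)
        loss.backward()
        opt.step()
    return loss.item()

# Network A: radius only -- PI must be 'learned' inside the weights.
net_a = make_net(1)
loss_a = train(net_a, r, area)

# Network B: radius, plus PI passed in as a constant input and r^2 as a
# precomputed, equation-derived feature (the 'educated' network).
pi_column = torch.full_like(r, math.pi)
net_b = make_net(3)
loss_b = train(net_b, torch.cat([r, pi_column, r ** 2], dim=1), area)

print(f"final loss, radius only:            {loss_a:.5f}")
print(f"final loss, with PI and r^2 inputs: {loss_b:.5f}")
```

Note that a purely constant input acts much like an extra bias term, so the clearer gains in this sketch come from the equation-derived r² feature; the comparison is meant to illustrate the idea of passing knowledge in rather than to prove the speed-up claim.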
