Training of Neural Indicators


Our Neural Indicators are neural nets, but they are not trained the way neural nets usually are. This may be confusing at first, but it becomes clearer once you realize that training methods for neural nets (such as backpropagation and TurboProp 2) are simply optimization techniques designed to find an optimal set of weights.


Genetic algorithms are optimization techniques too, and we apply the genetic algorithm optimizer to the task of finding weights for the neural indicator nets. The training set for each indicator is therefore the set of bars over which the optimizer optimizes. As with any other use of the optimizer, you should consider this set "in sample". Any use of the neural indicators during the subsequent backtest is considered "out of sample".
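The idea above can be illustrated with a minimal sketch: instead of backpropagation, a genetic algorithm evolves candidate weight vectors for a small feedforward net, scoring each candidate only on the in-sample bars. This is not the product's actual implementation; the network size, population size, mutation scale, and the `evolve` and `fitness` names are all illustrative assumptions.

```python
import math
import random

def net_output(weights, inputs):
    # Hypothetical tiny net: 2 inputs, 2 tanh hidden units, 1 tanh output.
    # weights is a flat list of 9 values (6 hidden, 3 output).
    w = weights
    h0 = math.tanh(w[0] * inputs[0] + w[1] * inputs[1] + w[2])
    h1 = math.tanh(w[3] * inputs[0] + w[4] * inputs[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def fitness(weights, bars):
    # Mean squared error over the given bars (lower is better).
    # bars is a list of (inputs, target) pairs.
    return sum((net_output(weights, x) - y) ** 2 for x, y in bars) / len(bars)

def evolve(bars, pop_size=30, generations=60, n_weights=9, seed=0):
    # Genetic algorithm: keep the better half each generation (elitism),
    # refill the population with crossover + Gaussian mutation.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, bars))
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            children.append([(x if rng.random() < 0.5 else y) + rng.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        pop = survivors + children
    return min(pop, key=lambda w: fitness(w, bars))
```

In this sketch, the bars passed to `evolve` are the in-sample set; evaluating `fitness` on bars the optimizer never saw corresponds to the out-of-sample use during the subsequent backtest.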