Common Parameters of the Nets


Each of the nets has a number of parameters whose values you will want the optimizer to find. The parameters common to all nets are described here.


Scale

The scale parameter specifies how far back you want the NI to look to compute the input scaling. All inputs are scaled (or normalized) into the same small range so that those with larger values (e.g., volume) do not exert more influence on the network than those with smaller values (e.g., percent change in close). The weights will compensate somewhat for small differences in magnitude by adjusting to smaller values; large differences in magnitude, however, must be handled by scaling.


The scale parameter should be set to an integer value of 10 or higher. If the scale parameter is 55, then to scale a given bar, the 55 bars just previous to that bar are used for scaling.
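
As a rough illustration, here is a minimal Python sketch of one plausible trailing-window scaler. The min-max formula, the function name scale_bar, and the 0-to-1 output range are assumptions made for the example, not necessarily the NI's actual scaling method.

    def scale_bar(values, i, scale=55):
        """Scale values[i] using only the `scale` bars just previous to bar i."""
        window = values[i - scale:i]        # e.g., the 55 preceding bars
        lo, hi = min(window), max(window)
        if hi == lo:                        # flat window: avoid dividing by zero
            return 0.5
        return (values[i] - lo) / (hi - lo)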


Because it is a moving window, the scale parameter also normalizes each input over time, in addition to normalizing across all inputs at a fixed point in time. As an input such as price rises, the scaling window moves with it. This means that, unlike most neural networks (including the neural network in the Prediction Wizard), the same input values can give different outputs later in time when the markets are different, as you'd want them to do. Larger values of scale weaken this normalization, while smaller values strengthen it. Let the optimizer decide!
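
Continuing the hypothetical scale_bar sketch above, the following lines show how the moving window lets the same raw value yield different scaled inputs once the market level has shifted:

    early = [100 + 0.1 * i for i in range(55)]   # market drifting near 100
    late = [104 + 0.1 * i for i in range(55)]    # same shape, higher level
    bar = 105.0                                  # identical raw input value
    print(scale_bar(early + [bar], 55))          # ~0.93: near the top of its window
    print(scale_bar(late + [bar], 55))           # ~0.19: near the bottom of its window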


Input1, Input2, etc.

These are the variables on which you believe the network can profitably base its buy or sell decisions. They can be virtually any other indicators or ratios. Most NI nets generate their signals from the current values of these inputs only, not from previous values. The exceptions are the Recurrent Nets, which look not only at current values but also at a condensed representation of several previous values, as sketched below.
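
To make the distinction concrete, here is a hedged Python sketch. The exponential-decay "memory" is only one possible way to condense previous values; the actual internal representation of the Recurrent Nets is not specified here.

    def current_only_signal(inputs, weights):
        # Most NI nets: the signal depends only on the current bar's inputs.
        return sum(w * x for w, x in zip(weights, inputs))

    def recurrent_signal(inputs, weights, memory, decay=0.5):
        # Recurrent Nets also carry a condensed summary of previous inputs.
        memory = [decay * m + (1 - decay) * x for m, x in zip(memory, inputs)]
        signal = sum(w * (x + m) for w, x, m in zip(weights, inputs, memory))
        return signal, memory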


w1, w2, etc.

These are the weights that the neural nets use internally to make their decisions. Recommended values are in the range –1 to 1. You should not try to set them yourself; let the optimizer find them, but make sure its search range is –1 to 1.
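
For intuition only, here is a crude random-search stand-in for the optimizer in Python, showing the weights confined to the recommended range. The real optimizer is more sophisticated; fitness here stands for whatever trading performance measure is being maximized.

    import random

    def find_weights(fitness, n_weights, trials=1000):
        # Search for weights in [-1, 1] that maximize the fitness function.
        best_w, best_fit = None, float("-inf")
        for _ in range(trials):
            w = [random.uniform(-1.0, 1.0) for _ in range(n_weights)]
            f = fitness(w)
            if f > best_fit:
                best_w, best_fit = w, f
        return best_w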


h1 and h2

These are the parameters that determine the activation functions used in the respective hidden neurons of Ward Nets. Set a parameter to 0 for the hyperbolic tangent or to 1 for the Gaussian. The recommended optimizer range is 0 to 1.
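
The switch between the two activations can be sketched in Python as follows. Rounding intermediate optimizer values to the nearest whole number is an assumption, and exp(-x*x) is one common form of the Gaussian activation; the product's exact formulas may differ.

    import math

    def hidden_activation(h, x):
        # h near 0: hyperbolic tangent; h near 1: Gaussian bump.
        if round(h) == 0:
            return math.tanh(x)
        return math.exp(-x * x)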