preset threshold. Other criteria can be used to terminate the evolution of the simplex; one such criterion is to stop when the difference in error across the simplex falls below a preset threshold.
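As a minimal sketch of such a termination criterion, the helper below (a hypothetical function, not from the original text) stops the simplex evolution once the spread between the largest and smallest error values over the simplex vertices falls below a preset threshold:

```python
def simplex_converged(vertex_errors, tol=1e-6):
    """Hypothetical termination test: stop evolving the simplex when
    the difference between the worst and best vertex errors falls
    below the preset threshold `tol`."""
    return max(vertex_errors) - min(vertex_errors) < tol

# Errors nearly equal across vertices -> the simplex has collapsed:
print(simplex_converged([0.5, 0.5 + 1e-7, 0.5 + 2e-7]))  # True
# Errors still spread out -> keep iterating:
print(simplex_converged([0.5, 0.7, 0.9]))                # False
```

In practice this test is usually combined with a cap on the total number of iterations, so that a slowly shrinking simplex cannot run indefinitely.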
As noted earlier, the only undesirable feature of this training scheme is that the computational burden grows with the size of the simplex (the number of simplex points). This is not unique to the present training scheme, however: the simplex size, N+1, depends on the length of the weight vector, which in turn is determined by the total number of interconnections in the neural net, and it is well known that training complexity increases with the size of the neural network. One could attempt to reduce the training complexity by placing arbitrary limits on the number of interconnections, but this is unattractive.

Some reduction in the overall training complexity, without arbitrarily limiting the network size, can be achieved by partitioning the neural network into a linear and a nonlinear portion. The nonlinear portion comprises the connections between the input nodes and the hidden nodes, while the linear portion consists of the connections between the hidden nodes and the output nodes (for example, the network outputs may be formed as a weighted sum of the outputs of the hidden nodes). The simplex optimization is then performed only over the weights in the nonlinear portion, while a linear least-squares minimization determines the optimal weights in the linear portion of the network.
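The partitioned scheme can be sketched as follows. This is a minimal illustration, not the chapter's own implementation: it assumes a one-hidden-layer tanh network fitted to toy data, uses SciPy's Nelder-Mead routine for the simplex search over the nonlinear (input-to-hidden) weights, and solves for the linear (hidden-to-output) weights by least squares inside the error function:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x).
X = np.linspace(-3.0, 3.0, 40).reshape(-1, 1)
y = np.sin(X).ravel()

n_hidden = 5  # number of hidden nodes (illustrative choice)

def hidden_outputs(w_flat):
    """Nonlinear portion: tanh hidden units driven by the
    input-to-hidden weights and biases packed in w_flat."""
    W = w_flat[:n_hidden].reshape(1, n_hidden)  # input-to-hidden weights
    b = w_flat[n_hidden:]                       # hidden biases
    H = np.tanh(X @ W + b)
    # Append a constant column so the linear portion includes a bias.
    return np.column_stack([H, np.ones(len(X))])

def training_error(w_flat):
    """For a fixed nonlinear portion, the optimal linear
    (hidden-to-output) weights follow from a least-squares solve;
    return the resulting sum-of-squares error."""
    H = hidden_outputs(w_flat)
    a, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.sum((H @ a - y) ** 2)

# Simplex search over the nonlinear weights only (N = 2*n_hidden,
# so the simplex has N+1 = 11 vertices instead of one per weight
# in the full network).
w0 = rng.standard_normal(2 * n_hidden)
res = minimize(training_error, w0, method="Nelder-Mead",
               options={"maxiter": 2000, "fatol": 1e-8, "xatol": 1e-8})
print(res.fun < training_error(w0))  # error reduced from the start
```

The key design point is that the least-squares solve sits inside the objective function, so each simplex vertex is always evaluated with its own optimal linear weights; the simplex therefore searches only the lower-dimensional nonlinear weight space.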
© 2001 by CRC Press LLC