[Figure: architecture of the sentence gestalt model. An input word and a context feed a hidden layer that forms the sentence gestalt; a probe, together with the sentence gestalt, feeds a second hidden layer whose output is a role/filler pattern.]
The approach presented here takes another direction, consistent with the principle of gradual language evolution and learning: it processes and evolves language items of increasing complexity. A further problem with Gasser's solution is that, because the packing and unpacking processes are split, the method needs the training sequences during both learning tasks, which is less plausible and lengthens learning. The sequential autoassociative task in my approach requires only a short-term memory that holds the sequence so it can be presented a second time at the output layer; the system merely has to perceive the input the environment provides throughout learning, which may therefore continue indefinitely.
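The training-signal side of the sequential autoassociative task can be sketched as follows: a short-term buffer holds each incoming sequence so it can be replayed as the target pattern, so the environment's raw stream alone supplies both input and teaching signal. This is a minimal illustrative sketch, not the chapter's implementation; the function name, fixed sequence length, and segmentation of the stream are all assumptions.

```python
from collections import deque

def autoassociative_pairs(stream, seq_len):
    """Turn a raw token stream into (input, target) training pairs for a
    sequential autoassociative task: the target is the input sequence
    itself, replayed from a short-term memory buffer.
    (Hypothetical sketch; the name and fixed-length segmentation are
    assumptions, not the author's method.)"""
    stm = deque(maxlen=seq_len)  # short-term memory buffer
    for token in stream:
        stm.append(token)
        if len(stm) == seq_len:
            seq = list(stm)
            yield seq, seq   # target == input: autoassociation
            stm.clear()      # next sequence starts fresh

# Learning can run indefinitely on whatever the environment provides:
stream = iter("the cat sat on the mat".split())
pairs = list(autoassociative_pairs(stream, 3))
```

Because the generator consumes the stream lazily, training can continue non-stop: no stored corpus of target sequences is needed beyond the short-term buffer, which is the point of contrast with Gasser's split packing/unpacking scheme.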
© 2001 by CRC Press LLC