Elman Networks and Jordan Networks

The following special case of the basic architecture above was employed by Jeff Elman. A three-layer network is used (arranged vertically as x, y, and z in the illustration), augmented with a set of "context units" (u in the illustration). The middle (hidden) layer is connected to these context units by connections fixed at a weight of one. At each time step, the input is propagated in a standard feed-forward fashion, and then a learning rule is applied. Because the hidden values propagate over the fixed back connections before the learning rule is applied, the context units always hold a copy of the previous values of the hidden units. The network can therefore maintain a kind of state, allowing it to perform tasks such as sequence prediction that are beyond the power of a standard multilayer perceptron.
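The state update can be illustrated with a short sketch in Python (a minimal sketch, assuming NumPy, tanh hidden units, and a linear output layer; the weight names W_xh, W_hh, W_hy are illustrative and do not come from the text above):

    import numpy as np

    def elman_forward(xs, W_xh, W_hh, W_hy, b_h, b_y):
        # context units start as a zero-valued copy of the hidden layer
        h = np.zeros(W_hh.shape[0])
        outputs = []
        for x in xs:
            # the hidden layer sees the current input plus the context units,
            # which hold the previous hidden values via fixed weight-one copies
            h = np.tanh(W_xh @ x + W_hh @ h + b_h)
            # output layer (kept linear in this sketch)
            y = W_hy @ h + b_y
            outputs.append(y)
        return outputs

Because the previous hidden state enters each update, the output at time t can depend on the whole input history, which a feed-forward multilayer perceptron applied to single inputs cannot capture.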

Jordan networks, due to Michael I. Jordan, are similar to Elman networks, but their context units are fed from the output layer rather than the hidden layer. The context units in a Jordan network are also referred to as the state layer, and each has a recurrent connection to itself that involves no other nodes. Elman and Jordan networks are also known as "simple recurrent networks" (SRN).
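A corresponding sketch for a Jordan network, under the same assumptions as above, feeds the state layer from the previous output and gives each state unit a self-connection (the self-connection weight alpha is an assumed, illustrative value):

    import numpy as np

    def jordan_forward(xs, W_xh, W_sh, W_hy, b_h, b_y, alpha=0.5):
        # state layer starts at zero; it has the same size as the output layer
        s = np.zeros(W_hy.shape[0])
        outputs = []
        for x in xs:
            # the hidden layer sees the current input plus the state layer
            h = np.tanh(W_xh @ x + W_sh @ s + b_h)
            # output layer (kept linear in this sketch)
            y = W_hy @ h + b_y
            # state layer: self-recurrent connection plus a copy of the output
            s = alpha * s + y
            outputs.append(y)
        return outputs

The only structural difference from the Elman sketch is which values are copied into the context/state units: the previous hidden activations in an Elman network, the previous output (plus the unit's own previous value) in a Jordan network.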
