This module implements various approaches to recurrence:

    Elman Simple Recurrent Network
    Jordan recurrent network
    NARX (Non-Linear AutoRegressive with eXogenous inputs) network
This is the base class for recurrent modifications. It is not intended to be used directly.
This function initializes the configuration class.
This function modifies the neural net that is passed in, applying the parameters that have been set on this class. Because the work is delegated to _apply_config, subclassed versions of apply_config can take multiple passes with less code.
This function does the actual work of applying the configuration.
This function creates connections to each of the upper nodes.
This is a separate function from the one in layers, because this version does not require all of the nodes on a layer to be used.
This function is a stub for getting the appropriate source nodes.
This function is a stub for getting the appropriate nodes to which the copy nodes will connect.
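To make that structure concrete, here is a minimal sketch of the base class described above, under assumed names (RecurrentConfigSketch, add_copy_node, connect) that are illustrative rather than the library's exact API:

    class RecurrentConfigSketch(object):
        """Base class for recurrent modifications; not used directly."""

        def apply_config(self, neural_net):
            # Public entry point: modify the network using the parameters
            # set on this instance.  Delegating to _apply_config lets
            # subclasses run multiple passes with less code.
            self._apply_config(neural_net)

        def _apply_config(self, neural_net):
            # Does the actual work: create a copy (context) node for each
            # source node and wire it to the upper nodes.
            for source in self._get_source_nodes(neural_net):
                copy_node = neural_net.add_copy_node(source)  # assumed helper
                self._connect_to_upper_nodes(copy_node, neural_net)

        def _connect_to_upper_nodes(self, copy_node, neural_net):
            # Connect the copy node to each upper node individually, so a
            # subset of a layer's nodes can be used (unlike a whole-layer
            # connect).
            for upper in self._get_upper_nodes(neural_net):
                neural_net.connect(copy_node, upper)  # assumed helper

        def _get_source_nodes(self, neural_net):
            # Stub: subclasses return the nodes whose values are copied.
            raise NotImplementedError

        def _get_upper_nodes(self, neural_net):
            # Stub: subclasses return the nodes the copy nodes connect to.
            raise NotImplementedError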
This class implements a process for converting a standard neural network into an Elman Simple Recurrent Network. The following defines such a configuration:

    Source nodes are nodes in the hidden layer.
    One level of copy nodes is used, referred to here as context units.
    The source value taken from a hidden node is its activation value, and the copy node (context) activation is linear; in other words, simply a copy of the activation.
    The source value replaces any previous value.
In the case of multiple hidden layers, this class will take the lowest hidden layer.
The class defaults to context nodes being fully connected to nodes in the hidden layer.
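As a minimal numeric sketch of that behavior (assuming plain NumPy arrays rather than the library's node objects), one Elman step might look like:

    import numpy as np

    def elman_step(x, context, W_in, W_ctx, W_out):
        # Hidden nodes see the current input plus the context units,
        # which are fully connected into the hidden layer.
        hidden = np.tanh(W_in @ x + W_ctx @ context)
        output = W_out @ hidden
        # Linear copy: the new context value simply replaces the old one.
        new_context = hidden.copy()
        return output, new_context

    # e.g. 3 inputs, 5 hidden/context units, 2 outputs:
    # rng = np.random.default_rng(0)
    # W_in = rng.normal(size=(5, 3))
    # W_ctx = rng.normal(size=(5, 5))
    # W_out = rng.normal(size=(2, 5))
    # context = np.zeros(5)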
This function initializes the weights and default connection type consistent with an Elman Network.
This function returns the hidden nodes from layer 1.
This class implements a process for converting a standard neural network into a Jordan-style recurrent network. The following defines such a configuration:

    The source value taken from an output node is its activation value, and the copy node (context) activation is linear; in other words, simply a copy of the activation.
    The source value is added to a slightly discounted previous copy value, so the weight on the existing value is greater than zero and less than 1.0.
In the case of multiple hidden layers, this class will take the lowest hidden layer.
The class defaults to context nodes being fully connected to nodes in the output layer.
Initialization in this class means passing the weight that will be multiplied by the existing value in the copy node.
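As a minimal numeric sketch of that update (again assuming plain NumPy arrays rather than the library's node objects):

    import numpy as np

    def jordan_step(x, context, W_in, W_ctx, W_out, existing_weight=0.5):
        # Context units (sized like the output layer) feed the hidden layer.
        hidden = np.tanh(W_in @ x + W_ctx @ context)
        output = W_out @ hidden
        # The output activation is added to the slightly discounted
        # previous copy value; existing_weight must lie in (0.0, 1.0).
        new_context = output + existing_weight * context
        return output, new_context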
This function returns the output nodes.
This class implements a process for converting a standard neural network into a NARX (Non-Linear AutoRegressive with eXogenous inputs) recurrent network.
It also contains some modifications suggested by Narendra and Parthasarathy (1990).
Source nodes can come from outputs and inputs. There can be multiple levels of copies (or order in this nomenclature) from either outputs or inputs.
The source value can be weighted fully, or the incoming weight adjusted lower.
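A minimal sketch of the resulting network input, assuming NumPy arrays and illustrative names rather than the library's node objects: each "order" is one copy level in a tapped delay line, and each delayed value can be passed through fully weighted or down-weighted.

    from collections import deque
    import numpy as np

    def narx_features(x, past_outputs, past_inputs,
                      output_weight=1.0, input_weight=1.0):
        # past_outputs / past_inputs are the tapped delay lines: deques of
        # length output_order / input_order, most recent value first.
        delayed = [output_weight * np.atleast_1d(y) for y in past_outputs]
        delayed += [input_weight * np.atleast_1d(u) for u in past_inputs]
        return np.concatenate([np.atleast_1d(x)] + delayed)

    # deque(maxlen=order) keeps exactly `order` copy levels; after each
    # step, past_outputs.appendleft(y) and past_inputs.appendleft(x) push
    # the newest values and silently drop the oldest.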
This class applies changes to the neural network by first applying the configurations related to the output nodes and then to the input nodes.
This function takes:

    the output order: the number of copy levels of output values
    the weight to apply to the incoming values from output nodes
    the input order: the number of copy levels of input values
    the weight to apply to the incoming values from input nodes
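A hedged sketch of that initialization, with the four parameters in the order listed; the class and argument names here are assumptions drawn from this description, not a confirmed signature:

    class NARXConfig(object):
        def __init__(self, output_order, incoming_weight_from_output,
                     input_order, incoming_weight_from_input):
            # Number of copy levels of output values, and the weight
            # applied to values arriving from output nodes.
            self.output_order = output_order
            self.output_weight = incoming_weight_from_output
            # Number of copy levels of input values, and the weight
            # applied to values arriving from input nodes.
            self.input_order = input_order
            self.input_weight = incoming_weight_from_input

    # Example: three copy levels of outputs at weight 0.6,
    # two copy levels of inputs at weight 0.4.
    config = NARXConfig(3, 0.6, 2, 0.4)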
This function returns either the output nodes or input nodes depending upon self._node_type.
This function first applies any parameters related to the output nodes and then any related to the input nodes.
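A minimal sketch of that two-pass flow, with illustrative names: self._node_type steers which source nodes each pass uses, so the same _apply_config handles both the output and the input side.

    class NARXApplySketch(object):
        def __init__(self, output_order, input_order):
            self.output_order = output_order
            self.input_order = input_order
            self._node_type = None

        def apply_config(self, neural_net):
            # First pass: parameters related to the output nodes...
            if self.output_order > 0:
                self._node_type = 'output'
                self._apply_config(neural_net)
            # ...then a second pass for the input nodes.
            if self.input_order > 0:
                self._node_type = 'input'
                self._apply_config(neural_net)

        def _get_source_nodes(self, neural_net):
            # Returns output or input nodes depending on self._node_type.
            if self._node_type == 'output':
                return neural_net.output_nodes
            return neural_net.input_nodes

        def _apply_config(self, neural_net):
            # The copy-level wiring would happen here (see the base-class
            # sketch earlier); omitted for brevity.
            for node in self._get_source_nodes(neural_net):
                pass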