This module implements the nodes for an artificial neural network.
This class is the prototype for nodes. Nodes hold values, they activate, and they maintain connections to other nodes.
This function initializes the internal values of the node. Since the class is a prototype, much of this is overridden in the actual classes.
This function returns the value of the node. This is the value prior to activation.
This is a stub function. Activations will vary by node.
This is a stub function.
This function applies the activation function to the value of the node.
This function computes the error function, typically the derivative of the error.
This function assigns a random value to the input connections. The random constraint limits the range of the random values.
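A minimal sketch of what this might look like; the attribute names (input_connections, set_weight) and the symmetric range convention are assumptions for illustration:

```python
import random

def randomize(self, random_constraint):
    """Sketch of a Node method: assigns bounded random weights."""
    for conn in self.input_connections:
        # Scale a random value in [-1.0, 1.0) down to the constrained range.
        conn.set_weight(random_constraint * (random.random() * 2.0 - 1.0))
```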
This function returns the activation type of the node.
This function updates the error of the node from upstream errors.
Depending on the halt-on-extremes setting, it may either adjust the error or halt if an overflow occurs.
Finally, it computes the derivative of the activation function and applies it to the error.
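A sketch of that sequence; the names halt_on_extremes and error_func, and the bound ERROR_LIMIT, are assumptions:

```python
import math

ERROR_LIMIT = 1e12  # assumed overflow bound

def update_error(self, halt_on_extremes):
    """Sketch of a Node method: overflow check, then derivative scaling."""
    if abs(self.error) > ERROR_LIMIT:
        if halt_on_extremes:
            raise ValueError("error has overflowed")
        # Otherwise, clip the error back into the permitted range.
        self.error = math.copysign(ERROR_LIMIT, self.error)
    # Apply the derivative of the activation at this node's value.
    self.error *= self.error_func(self.get_value())
```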
This function goes through each of the input connections to the node and updates the lower nodes.
The error from the current node is multiplied by the connection weight, checked against bounds limits, and posted to the lower node's error.
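A sketch of that loop, reusing the assumed ERROR_LIMIT bound from the previous sketch:

```python
import math

ERROR_LIMIT = 1e12  # assumed overflow bound

def backpropagate_error(self, halt_on_extremes):
    """Sketch: pushes this node's error down through each input connection."""
    for conn in self.input_connections:
        delta = self.error * conn.get_weight()
        # Inspect for bounds limits before posting to the lower node.
        if abs(delta) > ERROR_LIMIT:
            if halt_on_extremes:
                raise ValueError("propagated error has overflowed")
            delta = math.copysign(ERROR_LIMIT, delta)
        conn.lower_node.error += delta
```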
This class implements normal nodes used in the network. The node type is specified, and must be in [ACTIVATION_SIGMOID, ACTIVATION_TANH, ACTIVATION_LINEAR].
This function initializes the node type.
This function sets the activation type for the node. Currently available values are ACTIVATION_SIGMOID, ACTIVATION_TANH, and ACTIVATION_LINEAR. When specifying the activation type, the corresponding derivative type for the error function is assigned as well.
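A sketch of how that pairing might look, assuming the constants are strings and that the activation and derivative functions are the module-level ones documented at the end of this module:

```python
ACTIVATION_SIGMOID = 'sigmoid'   # assumed constant values
ACTIVATION_TANH = 'tanh'
ACTIVATION_LINEAR = 'linear'

def set_activation_type(self, activation_type):
    """Sketch: pairs each activation with its derivative for error math."""
    pairs = {
        ACTIVATION_SIGMOID: (sigmoid, sigmoid_derivative),
        ACTIVATION_TANH: (tanh, tanh_derivative),
        ACTIVATION_LINEAR: (linear, linear_derivative),
    }
    if activation_type not in pairs:
        raise ValueError("invalid activation type: %s" % activation_type)
    self.activation_type = activation_type
    # Assigning the derivative alongside the activation keeps the
    # error calculations consistent with the chosen function.
    self.activate_func, self.error_func = pairs[activation_type]
```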
This function sets the error function type.
This function sets the value of the node. A dedicated setter avoids accidentally setting a value on a bias node, whose value is always 1.0.
This function returns the internal value of the node.
This function walks the input connections, summing each lower node's activation value multiplied by the connection weight. Then, the node is activated.
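A sketch of that summation, with assumed attribute names:

```python
def feed_forward(self):
    """Sketch: value = sum of (weight * lower activation), then activate."""
    total = 0.0
    for conn in self.input_connections:
        total += conn.get_weight() * conn.lower_node.get_activation()
    self.value = total
    # Apply the activation function to the freshly summed value.
    self.activate()
```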
This function adds an input connection. This is defined as a connection that comes from a layer on the input side, or in this application, a lower-numbered layer.
The reason that there is a specific function, rather than just an append, is to avoid accidentally adding an input connection to a bias node.
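A sketch of the guard; set_value would use the same pattern. The node_type check is an assumed mechanism:

```python
def add_input_connection(self, conn):
    """Sketch: refuses to attach an input connection to a bias node."""
    if self.node_type == 'bias':
        raise ValueError("bias nodes cannot accept input connections")
    self.input_connections.append(conn)
```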
This function adjusts incoming weights as part of the back propagation process, taking into account the node error. The learn rate moderates the degree of change applied to the weight from the errors.
This function accepts the learn rate, the activated value received from the node connected below, and the current error of the node.
It then multiplies these together, producing the adjustment to the connection weight that results from the error.
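In other words, each weight change is the product of three terms. A sketch, with assumed names:

```python
def update_weights(self, learn_rate):
    """Sketch: each incoming weight moves by learn rate * activation * error."""
    for conn in self.input_connections:
        # learn rate * lower node's activation * this node's error
        adjustment = learn_rate * conn.lower_node.get_activation() * self.error
        conn.add_weight(adjustment)
```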
This class maintains the form used for copy nodes in recurrent networks. The copy nodes are used after propagation: the values from nodes in upper layers, such as the hidden nodes, are copied to the CopyNode. The source_node defines the node from which the value arrives.
An issue with using copy nodes is that you must be careful to adhere to a sequence when using the nodes. For example, if a copy node's value is a source to another copy node, you will want to copy the values from downstream nodes first.
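A sketch of a safe ordering for that case, where copy_a and copy_b are hypothetical CopyNode instances and load_source_value is an assumed name for the method that performs the copy:

```python
# Illustrative only: copy_a sources a hidden node, and copy_b sources
# copy_a, so copy_b is refreshed first to pick up copy_a's prior value.
copy_b.load_source_value()   # downstream copy node first
copy_a.load_source_value()   # then the copy node it reads from
```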
This function initializes the node and sets up initial values for weights copied to it.
Sets the source of previous recurrent values.
Gets the source of previous recurrent values.
This function transfers the source node value to the copy node value.
This function gets the type of source value to use.
Source type will be either 'a' for the activation value or 'v' for the summed input value.
This function gets the value that will be multiplied by the incoming source value.
This function gets the value that will be multiplied by the existing value.
This function accepts parameters governing which source information is used, and how the incoming and existing values are discounted.
Source type can be either 'a' for the activation value or 'v' for the summed input value.
By setting the existing weight to zero and the incoming discount to 1.0, an Elman-style update takes place.
By setting the existing weight to some fraction of 1.0, such as 0.5, a Jordan-style update can take place.
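Taken together, the implied update rule and the two styles might look like this sketch; the method and attribute names are assumptions:

```python
def load_source_value(self):
    """Sketch of the blend that the two weights control."""
    if self.source_type == 'a':
        incoming = self.source_node.get_activation()
    else:  # 'v' uses the summed input value instead
        incoming = self.source_node.get_value()
    # Discount the existing value, then blend in the incoming one.
    self.value = (self.existing_weight * self.value
                  + self.incoming_weight * incoming)

# Elman-style:  incoming weight 1.0, existing weight 0.0
# Jordan-style: incoming weight 1.0, existing weight 0.5
```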
Bias nodes provide value because of their connections, and their value and activation are always 1.0.
This function initializes the node, sets the type, and sets the return value to always 1.0.
This function returns the activation of the bias node, which is always 1.0.
Activation is a no-op for the bias node; the activation remains 1.0.
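A sketch of the whole class, subclassing the prototype Node described above:

```python
class BiasNode(Node):
    """Sketch: a node whose value and activation are pinned at 1.0."""

    def get_value(self):
        return 1.0

    def get_activation(self):
        return 1.0
```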
Connection object that holds the weighting information between nodes as well as a reference to the nodes that are connected.
The lower_node lives on a lower layer, closer to the input layer. The upper node lives on a higher layer, closer to the output layer.
This function sets the weight of the connection, which relates to the impact that a lower node's activation will have on an upper node's value.
This function adds to the weight of the connection, which is proportional to the impact that a lower node's activation will have on an upper node's value.
This function gets the weight of the connection, which relates to the impact that a lower node's activation will have on an upper node's value.
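A sketch of the class these three functions belong to, with attribute names following the descriptions above:

```python
class Connection(object):
    """Sketch of the connection object; names are assumptions."""

    def __init__(self, lower_node, upper_node, weight=0.0):
        self.lower_node = lower_node   # closer to the input layer
        self.upper_node = upper_node   # closer to the output layer
        self.weight = weight

    def set_weight(self, weight):
        self.weight = weight

    def add_weight(self, weight):
        self.weight += weight

    def get_weight(self):
        return self.weight
```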
Calculates the sigmoid for the value.
Calculates the derivative of the sigmoid for the value.
This function calculates the hyperbolic tangent function.
This function calculates the tanh derivative of the value.
This function simply returns the value given to it.
This function returns 1.0, the derivative of the linear function. Normally, I would just return 1.0 outright, but pylint was complaining.
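For reference, a sketch of all six module-level functions. Whether the derivative functions receive the raw value or the already-activated value is an implementation detail not settled here; the sketch assumes the raw value:

```python
import math

def sigmoid(value):
    # Logistic function: 1 / (1 + e^-x)
    return 1.0 / (1.0 + math.exp(-value))

def sigmoid_derivative(value):
    # Derivative expressed through the sigmoid itself.
    sig = sigmoid(value)
    return sig * (1.0 - sig)

def tanh(value):
    return math.tanh(value)

def tanh_derivative(value):
    # d/dx tanh(x) = 1 - tanh(x)**2
    return 1.0 - math.tanh(value) ** 2

def linear(value):
    return value

def linear_derivative(value):
    # The argument is unused; accepting it keeps one signature for all
    # of the derivative functions.
    return 1.0
```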