Nodes Module

This module implements the nodes for an artificial neural network.

ProtoNode Class

This class is the prototype for nodes. Nodes hold values, activate, and maintain connections to other nodes.

def __init__(self):

This function initializes the internal values of the node. Since the class is a prototype, much of this is overridden by the concrete classes.

def get_value(self):

This function returns the value of the node. This is the value prior to activation.

def _activate(value):

This is a stub function. Activations will vary by node.

def _error_func(value):

This is a stub function.

def activate(self):

This function applies the activation function to the value of the node.

def error_func(self, value):

This function computes the error function, typically the derivative of the error.

def randomize(self, random_constraint=RANDOM_CONSTRAINT):

This function assigns a random value to each input connection. The random constraint limits the range of the random values.
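A minimal sketch of what this randomization might look like. The names `input_connections` and `weight`, and the default `RANDOM_CONSTRAINT` value, are assumptions based on this documentation, not the actual implementation.

```python
import random

# Assumed module-level default; the real value may differ.
RANDOM_CONSTRAINT = 1.0

def randomize(node, random_constraint=RANDOM_CONSTRAINT):
    """Assign a random weight to each input connection."""
    for conn in node.input_connections:
        # Draw each weight uniformly from [-constraint, +constraint].
        conn.weight = random.uniform(-random_constraint, random_constraint)
```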

def get_activation_type(self):

This function returns the activation type of the node.

def update_error(self, halt_on_extremes):

This function updates the error of the node from upstream errors.

Depending on the halt_on_extremes flag, it may also adjust the error or halt if overflows occur.

Finally, it computes the derivative of the activation type, and modifies the error.

def _update_lower_node_errors(self, halt_on_extremes):

This function goes through each of the input connections to the node and updates the lower nodes.

The error from the current node is multiplied by the connection weight, checked against bounds limits, and posted to the lower node's error.
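The propagation step described above can be sketched as follows. The attribute names `input_connections`, `lower_node`, `weight`, and `error`, and the overflow limit, are assumptions for illustration; the real class may differ in how it detects and handles extremes.

```python
import math

def update_lower_node_errors(node, halt_on_extremes=False, limit=1e10):
    """Post this node's error, scaled by each connection weight, to the lower nodes."""
    for conn in node.input_connections:
        contribution = node.error * conn.weight
        if abs(contribution) > limit:
            if halt_on_extremes:
                raise OverflowError("error contribution out of bounds")
            # Otherwise clip the contribution rather than halting.
            contribution = math.copysign(limit, contribution)
        conn.lower_node.error += contribution
```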

Node Class

This class implements normal nodes used in the network. The node type is specified, and must be in [ACTIVATION_SIGMOID, ACTIVATION_TANH, ACTIVATION_LINEAR].

def __init__(self, node_type=None):

This function initializes the node type.

def set_activation_type(self, activation_type):

This function sets the activation type for the node. Currently available values are ACTIVATION_SIGMOID, ACTIVATION_TANH, and ACTIVATION_LINEAR. When specifying the activation type, the corresponding derivative type for the error function is assigned as well.

def _set_error_func(self, activation_type):

This function sets the error function type.

def set_value(self, value):

This function sets the value. A dedicated setter is used to avoid accidentally setting a value on a bias node; the bias node value is always 1.0.

def get_value(self):

This function returns the internal value of the node.

def feed_forward(self):

This function walks the input connections, summing each lower node's activation value multiplied by the connection weight. Then the node is activated.
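A sketch of this feed-forward step, under the assumption that each connection exposes `lower_node` and `weight` and each node keeps `value` and `activation_value` attributes (the actual names may differ). The sigmoid is used here as an example activation.

```python
import math

def feed_forward(node):
    """Sum weighted lower-node activations, then activate the node."""
    total = 0.0
    for conn in node.input_connections:
        # Each lower node's activation, scaled by the connection weight.
        total += conn.lower_node.activation_value * conn.weight
    node.value = total
    # Activate on the summed input (sigmoid shown as an example).
    node.activation_value = 1.0 / (1.0 + math.exp(-node.value))
```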

def add_input_connection(self, conn):

This function adds an input connection. This is defined as a connection that comes from a layer on the input side, or in this application, a lower-numbered layer.

The reason that there is a specific function rather than using just an append is to avoid accidentally adding an input connection to a bias node.

def adjust_weights(self, learnrate, halt_on_extremes):

This function adjusts incoming weights as part of the back propagation process, taking into account the node error. The learnrate moderates the degree of change applied to the weight from the errors.

def _adjust_weight(learnrate, activate_value, error):

This function accepts the learn rate, the activated value received from a node connected from below, and the current error of the node.

It then multiplies them together, yielding the adjustment to the connection weight resulting from the error.
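The adjustment described above reduces to a single product; a minimal sketch, with the function name assumed from this documentation:

```python
def adjust_weight(learnrate, activate_value, error):
    """Weight delta: learn rate x lower node's activated value x node error."""
    return learnrate * activate_value * error
```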

CopyNode Class

This class maintains the form used for copy nodes in recurrent networks. The copy nodes are used after propagation. The values from nodes in upper layers, such as the hidden nodes, are copied to the CopyNode. The source_node defines the node from which the value arrives.

An issue with using copy nodes is that you must be careful to adhere to a sequence when using them. For example, if a copy node's value is a source to another copy node, you will want to copy the values from downstream nodes first.

def __init__(self):

This function initializes the node and sets up initial values for weights copied to it.

def set_source_node(self, node):

Sets the source of previous recurrent values.

def get_source_node(self):

Gets the source of previous recurrent values.

def load_source_value(self):

This function transfers the source node value to the copy node value.

def get_source_type(self):

This function gets the type of source value to use.

Source type will be either 'a' for the activation value or 'v' for the summed input value.

def get_incoming_weight(self):

This function gets the value that will be multiplied by the incoming source value.

def get_existing_weight(self):

This function gets the value that will be multiplied by the existing value.

def source_update_config(self, source_type, incoming_weight, existing_weight):

This function accepts parameters governing which source information is used, and how the incoming and existing values are weighted.

Source type can be either 'a' for the activation value or 'v' for the summed input value.

By setting the existing weight to zero and the incoming weight to 1.0, an Elman-style update takes place.

By setting the existing weight to some fraction of 1.0, such as 0.5, a Jordan-style update takes place.
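The update implied above can be sketched as a blend of the incoming source value and the copy node's existing value. The attribute names (`source_node`, `source_type`, `incoming_weight`, `existing_weight`, `value`, `activation_value`) are assumptions based on this documentation.

```python
def load_source_value(copy_node):
    """Blend the source value into the copy node by the configured weights."""
    source = copy_node.source_node
    # 'a' uses the activation value; 'v' uses the summed input value.
    incoming = (source.activation_value if copy_node.source_type == 'a'
                else source.value)
    copy_node.value = (incoming * copy_node.incoming_weight
                       + copy_node.value * copy_node.existing_weight)
```

With `incoming_weight=1.0, existing_weight=0.0` this fully replaces the copy value (Elman style); with `existing_weight=0.5` the previous copy value decays gradually (Jordan style).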

BiasNode Class

Bias nodes provide value because of their connections, and their value and activation are always 1.0.

def __init__(self):

This function initializes the node, sets the type, and sets the return value to always 1.0.

def activate(value=None):

The activation of the bias node is always 1.0.

def error_func(value=1.0):

The error function of the bias node always returns 1.0.

Connection Class

Connection object that holds the weighting information between nodes as well as a reference to the nodes that are connected.

def __init__(self, lower_node, upper_node, weight=0.0):

The lower_node lives on a lower layer, closer to the input layer. The upper node lives on a higher layer, closer to the output layer.

def set_weight(self, weight):

This function sets the weight of the connection, which relates to the impact that a lower node's activation will have on an upper node's value.

def add_weight(self, weight):

This function adds to the weight of the connection, which is proportional to the impact that a lower node's activation will have on an upper node's value.

def get_weight(self):

This function gets the weight of the connection, which relates to the impact that a lower node's activation will have on an upper node's value.

def sigmoid(value):

This function calculates the sigmoid of the value.

def sigmoid_derivative(value):

Calculates the derivative of the sigmoid for the value.

def tanh(value):

This function calculates the hyperbolic tangent function.

def tanh_derivative(value):

This function calculates the derivative of tanh for the value.

def linear(value):

This function simply returns the value given to it.

def linear_derivative(value):

This function returns 1.0. Normally, I would just return 1.0, but pylint was complaining.
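Minimal implementations of the activation functions and derivatives listed above; here the derivatives take the pre-activation value, though some implementations pass the already-activated value instead.

```python
import math

def sigmoid(value):
    return 1.0 / (1.0 + math.exp(-value))

def sigmoid_derivative(value):
    # The sigmoid's derivative expressed via its own output: s * (1 - s).
    s = sigmoid(value)
    return s * (1.0 - s)

def tanh(value):
    return math.tanh(value)

def tanh_derivative(value):
    # d/dx tanh(x) = 1 - tanh(x)^2
    return 1.0 - math.tanh(value) ** 2

def linear(value):
    return value

def linear_derivative(value):
    # Constant slope of the identity function.
    return 1.0
```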