Layers Module

This module implements a layer class for an artificial neural network.

Layer Class

A layer comprises a list of nodes and the behaviors appropriate for its place in the hierarchy. A layer_type can be 'input', 'hidden', or 'output'.

def __init__(self, layer_no, layer_type):

The layer class initializes with the layer number and the type of layer. Lower layer numbers are toward the input end of the network, with higher numbers toward the output end.
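
As an illustrative sketch, a three-layer setup might be built as follows (the import path is an assumption; only the documented constructor is used):

    from layers import Layer   # assumed import path

    input_layer = Layer(0, 'input')    # layer 0: nearest the input end
    hidden_layer = Layer(1, 'hidden')  # layer 1: between input and output
    output_layer = Layer(2, 'output')  # layer 2: produces the network output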

def total_nodes(self, node_type=None):

This function returns the total number of nodes in the layer. If a node_type is given, it returns only the count of nodes of that particular type, such as 'copy'.
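
For example, assuming nodes have already been added to the layer (see add_nodes below):

    all_nodes = hidden_layer.total_nodes()          # every node in the layer
    copy_nodes = hidden_layer.total_nodes('copy')   # only the 'copy' nodes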

def unconnected_nodes(self):

This function looks for nodes that do not have an input connection.

def values(self):

This function returns the values for each node as a list.

def activations(self):

This function returns the activation values for each node as a list.

def set_activation_type(self, activation_type):

This function is a mechanism for setting the activation type for an entire layer. If most nodes need one specific type, this function can be used to set it across the layer, and the exceptions can then be set individually on a per-node basis afterwards.
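
A sketch of that workflow, assuming nodes have already been added (see add_nodes below) and that 'sigmoid' and 'linear' are valid activation type names:

    # Set the default for the whole layer first...
    hidden_layer.set_activation_type('sigmoid')

    # ...then override individual nodes as needed.
    node = hidden_layer.get_node(0)
    node.set_activation_type('linear')   # assumes nodes expose a similar per-node setter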

def add_nodes(self, number_nodes, node_type, activation_type=None):

This function adds nodes in bulk for initialization.

If an optional activation type is passed in, it will be set for the new nodes. Otherwise, the default activation type for the layer will be used.
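
An illustrative use, continuing the sketch above (the node type and activation type strings accepted depend on the node implementations; the values below are assumptions, apart from 'copy', which is mentioned elsewhere in this module):

    input_layer.add_nodes(2, 'input')                # uses the layer's default activation type
    hidden_layer.add_nodes(4, 'hidden', 'sigmoid')   # explicit activation type for these nodes
    hidden_layer.add_nodes(1, 'copy')                # a 'copy' node, as referenced above
    output_layer.add_nodes(1, 'output', 'linear')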

def add_node(self, node):

This function adds a node that has already been formed. Since it can originate outside of the initialization process, the activation type is assumed to be set appropriately already.

def get_node(self, node_no):

This function returns the node associated with node_no. Although it might seem reasonable to look the node up by its position within the node list, sparse nodes are supported, so node_no may not match the node's position in the list.
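
For example (node numbers here are illustrative):

    # Look the node up by its node_no, not by its list position.
    node_five = hidden_layer.get_node(5)
    # With sparse nodes, hidden_layer.get_nodes()[5] could be a different
    # node, because node numbers may have gaps.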

def get_nodes(self, node_type=None):

This function returns all the nodes of a layer. Optionally it can return all of the nodes of a particular type, such as 'copy'.

def connect_layer(self, lower_layer):

This function accepts a lower layer within the network and connects each node in that layer to the nodes in the current layer.

An exception is made for bias nodes. There is no reason to connect a bias node to a lower layer, since it always produces a 1.0 for its value and activation.
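
Continuing the sketch from above, wiring the layers together might look like:

    # Each node in the lower layer is connected up into the current layer;
    # bias nodes in the current layer are skipped, since they need no inputs.
    hidden_layer.connect_layer(input_layer)
    output_layer.connect_layer(hidden_layer)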

def load_inputs(self, inputs):

This function takes a list of inputs that are applied sequentially to each node in the input layer.

def load_targets(self, targets):

This function takes a list of targets that are applied sequentially to each node in the output layer.
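
For example, with two input nodes and one output node, a single pattern might be loaded as follows (the values are illustrative):

    input_layer.load_inputs([0.0, 1.0])   # one value per input node, in order
    output_layer.load_targets([1.0])      # one target per output node, in order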

def randomize(self, random_constraint):

This function builds random weights for all the input connections in the layer.
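
A hedged example, assuming random_constraint limits the size of the initial weights:

    # The exact meaning of the constraint is implementation-specific;
    # the value here is only illustrative.
    hidden_layer.randomize(.5)
    output_layer.randomize(.5)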

def feed_forward(self):

This function loops through the nodes in the layer and causes each node to feed forward the values from the nodes below it.

def update_error(self, halt_on_extremes):

This function loops through the nodes in the layer and causes each node to update its error as part of the backpropagation process.

def adjust_weights(self, learnrate, halt_on_extremes):

This function loops through the nodes, causing each node to adjust its weights based on the errors and the learning rate.
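
Taken together, one training pass over a single loaded pattern could be sketched as below; the learning rate value is illustrative, and in practice a higher-level network class would normally drive this sequence:

    # Forward pass: from the input end toward the output end.
    hidden_layer.feed_forward()
    output_layer.feed_forward()

    # Backward pass: errors are updated from the output end back toward
    # the input end, then the input-connection weights are adjusted.
    output_layer.update_error(halt_on_extremes=True)
    hidden_layer.update_error(halt_on_extremes=True)

    output_layer.adjust_weights(.35, halt_on_extremes=True)
    hidden_layer.adjust_weights(.35, halt_on_extremes=True)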

def get_errors(self):

This function returns a list of the error associated with each node.

def get_weights(self):

This function returns a list of the weights of input connections into each node in the layer.
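
These two accessors are mainly useful for inspecting the state of a layer, for example while debugging:

    print(output_layer.get_errors())    # one error value per node
    print(output_layer.get_weights())   # the input-connection weights for each node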