This module implements a layer class for an artificial neural network.
A layer comprises a list of nodes and the behaviors appropriate to its place in the hierarchy. A layer_type can be 'input', 'hidden', or 'output'.
The layer class initializes with the layer number and the type of layer. Lower layer numbers are toward the input end of the network, with higher numbers toward the output end.
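A minimal sketch of that constructor, assuming the attribute names layer_no, layer_type, and nodes (the actual class may differ):

```python
class Layer:
    """Minimal sketch of the layer described above; attribute names
    beyond layer_no and layer_type are assumptions."""

    def __init__(self, layer_no, layer_type):
        if layer_type not in ('input', 'hidden', 'output'):
            raise ValueError("layer_type must be 'input', 'hidden', or 'output'")
        # Lower layer numbers sit toward the input end of the network.
        self.layer_no = layer_no
        self.layer_type = layer_type
        self.nodes = []
```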
This function returns the total number of nodes. It can also return the total for a particular node type, such as 'copy'.
This function looks for nodes that do not have an input connection.
This function returns the values for each node as a list.
This function returns the activation values for each node as a list.
This function sets the activation type for an entire layer at once. If most nodes need one specific type, use this function first, then set any exceptions individually afterward.
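The bulk-then-override pattern can be sketched as follows; the Node class here is a hypothetical stand-in for illustration:

```python
class Node:
    """Hypothetical node carrying only an activation_type, for illustration."""
    def __init__(self):
        self.activation_type = 'linear'

def set_activation_type(nodes, activation_type):
    # Give every node in the layer the same activation type; individual
    # nodes can still be overridden afterwards.
    for node in nodes:
        node.activation_type = activation_type

nodes = [Node() for _ in range(3)]
set_activation_type(nodes, 'sigmoid')
nodes[2].activation_type = 'tanh'  # per-node override after the bulk set
```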
This function adds nodes in bulk for initialization.
If an optional activation type is passed in, it will be set on the new nodes. Otherwise, the default activation type for the layer will be used.
This function adds a node that has already been formed. Since the node can originate outside of the initialization process, its activation type is assumed to already be set appropriately.
This function returns the node associated with node_no. Although it might seem reasonable to look the node up by its position within the node list, sparse nodes are supported, so node_no and list position can differ.
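The sparse-numbering point can be illustrated with a hypothetical lookup helper that searches by attribute rather than indexing:

```python
class Node:
    """Hypothetical node identified by a node_no, for illustration."""
    def __init__(self, node_no):
        self.node_no = node_no

def get_node(nodes, node_no):
    # With sparse nodes, node_no and list position can differ,
    # so search by the node_no attribute instead of indexing.
    for node in nodes:
        if node.node_no == node_no:
            return node
    return None

# Sparse numbering: node_no 5 sits at list position 2.
nodes = [Node(0), Node(2), Node(5)]
```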
This function returns all the nodes of a layer. Optionally it can return all of the nodes of a particular type, such as 'copy'.
This function accepts a lower layer within the network and connects each node in that layer to the nodes in the current layer.
An exception is made for bias nodes. There is no reason to connect a bias node to a lower layer, since it always produces a 1.0 for its value and activation.
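A sketch of that connection step, with the bias exception, assuming hypothetical node_type and input_connections attributes:

```python
class Node:
    """Hypothetical node for illustration; a 'bias' node takes no inputs."""
    def __init__(self, node_type='normal'):
        self.node_type = node_type
        self.input_connections = []

def connect_layer(lower_nodes, upper_nodes):
    # Fully connect the lower layer to the current layer, skipping any
    # bias node in the current layer: a bias node always outputs 1.0,
    # so it never needs inputs from below.
    for upper in upper_nodes:
        if upper.node_type == 'bias':
            continue
        for lower in lower_nodes:
            upper.input_connections.append(lower)

lower = [Node(), Node()]
upper = [Node(), Node('bias')]
connect_layer(lower, upper)
```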
This takes a list of inputs that are applied sequentially to the nodes in the input_layer.
This takes a list of targets that are applied sequentially to the nodes in the output_layer.
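Both loaders follow the same sequential-assignment pattern, sketched here with a hypothetical value attribute:

```python
class Node:
    """Hypothetical node holding a single value, for illustration."""
    def __init__(self):
        self.value = 0.0

def load_values(nodes, values):
    # Assign each value in sequence to the corresponding node; the same
    # pattern serves for inputs (input layer) and targets (output layer).
    if len(values) != len(nodes):
        raise ValueError("number of values must match number of nodes")
    for node, value in zip(nodes, values):
        node.value = value

input_nodes = [Node(), Node(), Node()]
load_values(input_nodes, [0.1, 0.5, 0.9])
```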
This function builds random weights for all the input connections in the layer.
This function loops through the nodes in the layer and causes each node to feed forward the values from the nodes below it.
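The per-node step is the usual weighted sum followed by an activation function; this sketch assumes a sigmoid activation, though the class supports other types:

```python
import math

def sigmoid(x):
    """Logistic activation, used here as an assumed example."""
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(activations, weights):
    # Each node forms a weighted sum of the activations of the nodes
    # below it, then applies its activation function.
    total = sum(a * w for a, w in zip(activations, weights))
    return sigmoid(total)
```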
This function loops through the nodes in the layer and causes each node to update its error as part of the backpropagation process.
This function loops through the nodes, causing each node to adjust its weights based on its error and the learning rate.
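A delta-rule-style sketch of the weight adjustment; this is an assumed update form, and the class's exact rule may differ:

```python
def adjust_weights(weights, inputs, error, learning_rate):
    # Each weight moves in proportion to the learning rate, the node's
    # error signal, and the input carried on that connection.
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

new_w = adjust_weights([0.5, -0.2], [1.0, 2.0], 0.1, 0.5)
```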
This function returns a list of the error associated with each node.
This function returns a list of the weights of input connections into each node in the layer.