This module implements the components for an artificial neural network.
This class implements a standard multi-layered perceptron (MLP).
Because there are a number of parameters to specify, no specific variables are initialized within __init__.
This function initializes the layers. The variables:
The initial network is created, and then a series of modifications can be made to enable recurrent features. recurrent_mods are configurations for modifications to the neural network that is created within init_layers.
For example, if init_layers(input_nodes, total_hidden_nodes_list, output_nodes, ElmanSimpleRecurrent()) were used, then the initial network structure of input, hidden, and output nodes would be created. After that, the additional copy or context nodes, which automatically transfer values from the lowest hidden layer, would be added to the input layer.
More than one recurrent scheme can be applied, each one adding to the existing network.
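For example, a small recurrent network might be built as follows. This is a minimal sketch: init_layers and ElmanSimpleRecurrent come from the description above, while the NeuralNet class name is an assumption.

    # A minimal sketch; NeuralNet is an assumed class name.
    net = NeuralNet()
    # 2 input nodes, one hidden layer of 10 nodes, 1 output node, with
    # Elman-style context nodes copied from the lowest hidden layer.
    net.init_layers(2, [10], 1, ElmanSimpleRecurrent())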
This function sets the flag as to whether the program should halt upon encountering extremely positive or negative numbers. This can happen when using linear activation functions with data that has not been normalized; values such as nan and inf can result otherwise. If halting is disabled, the values are instead simply scaled back to LARGEVALUE_LIMIT and NEGVALUE_LIMIT. A satisfactory output from the network may be in doubt, but at least the run has the possibility of completing.
This function returns the True/False flag for halting on extremes.
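As an illustration, the clipping behavior when halting is disabled might be sketched like this. Only the names LARGEVALUE_LIMIT and NEGVALUE_LIMIT come from the description above; the limit magnitudes and the helper function are hypothetical.

    LARGEVALUE_LIMIT = 1e12     # assumed magnitude, not from the source
    NEGVALUE_LIMIT = -1e12

    def clip_extreme(value, halt_on_extremes):
        # Halt on extreme values, or scale them back to the limits.
        if value > LARGEVALUE_LIMIT or value < NEGVALUE_LIMIT:
            if halt_on_extremes:
                raise ValueError("extreme value encountered: %s" % value)
            value = max(min(value, LARGEVALUE_LIMIT), NEGVALUE_LIMIT)
        return value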
This function sets a value between 0 and 1 for limiting the random weights upon initialization. For example, 0.8 would limit weights to -0.8 through 0.8.
This function gets the random constraint used in weight initialization.
This function sets the number of epochs or cycles through the learning data.
This function gets the number of epochs that will run during learning.
This function sets a value for time-delayed data. For example, if the time delay were 5, then input values would be taken 5 at a time. Upon the next increment, the input values would again number 5, consisting of 4 of the previous values and one new value.
This function gets the time delay to be used with timeseries data.
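To make the sliding window concrete, here is a hypothetical sketch of how time-delayed inputs advance one value at a time:

    def delayed_windows(series, lag):
        # Yield overlapping windows of length lag, each advancing one value.
        for i in range(len(series) - lag + 1):
            yield series[i:i + lag]

    # With a time delay of 5, range(10) yields [0, 1, 2, 3, 4],
    # then [1, 2, 3, 4, 5], and so on: four prior values plus one new one.
    windows = list(delayed_windows(list(range(10)), 5))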
This function sets the inputs. Inputs are treated as a list.
This function sets the targets.
This function sets the learn rate for the modeling. It is used to determine how much weight to associate with an error when learning.
This function gets the learn rate for the modeling. It is used to determine how much weight to associate with an error when learning.
This function sets the data positions by type.
This function sets the range within the data that is to be used for learning.
This function gets the range within the data that is to be used for learning.
This function checks the position or index of the data and determines whether the position is consistent with the time delay that has been set.
This function is a generator for learning data. It is assumed that in many cases this function will be overridden with a situation-specific version.
This function is a generator for validation data. It is assumed that in many cases this function will be overridden with a situation-specific version.
This function is a generator for testing data. It is assumed that in many cases this function will be overridden with a situation-specific version.
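Since these generators are expected to be overridden, a situation-specific replacement might look like the following sketch. The subclass, the generator name, and the assumption that the generator yields data positions drawn from get_learn_range are all hypothetical.

    class MyNet(NeuralNet):
        def get_learn_data(self):
            # Hypothetical override: yield only every other position
            # from the learning range.
            start, end = self.get_learn_range()
            for position in range(start, end + 1, 2):
                yield position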
This function gets an input from the list of all inputs.
This function accepts integers representing the starting and ending positions within a set of data and yields position numbers in random order until all of the positions have been exhausted.
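A sketch of such a generator; shuffling the positions once up front yields each position exactly once, in random order.

    import random

    def random_positions(start, end):
        # Yield each position in [start, end] exactly once, randomly ordered.
        positions = list(range(start, end + 1))
        random.shuffle(positions)
        for position in positions:
            yield position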
This function validates, to a degree, the start and end positions for data ranges.
This function sets the start and end positions for the validation range. This first test period is often used, after each epoch run, to test the current weights against data that is not within the learning period.
This function gets the start and end positions for the validation range. This first test period is often used, after each epoch run, to test the current weights against data that is not within the learning period.
This function sets the start position and ending position for the out-of-sample range.
This function gets the start position and ending position for the out-of-sample range.
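Taken together, the three ranges might be configured as in this sketch; the method names and positions are assumptions based on the descriptions above.

    # Hypothetical split of 1000 data positions.
    net.set_learn_range(0, 699)             # learning data
    net.set_validation_range(700, 849)      # checked after each epoch
    net.set_out_of_sample_range(850, 999)   # final out-of-sample testing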
Init connections sets up the linkages between layers.
This function connects all nodes, which is typically desirable. However, note that by substituting a different process, a sparse network can be achieved. There is also no restriction against connecting layers in a non-traditional fashion, such as with skip-layer connections.
Generates connections to the lower layer.
If it is the input layer, then it is skipped. It could raise an error instead, but that seems pointless.
This function randomizes the weights in all of the connections.
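A sketch of how the random constraint described earlier might bound each initial weight; the function itself is hypothetical.

    import random

    def randomize_weight(random_constraint):
        # Draw an initial weight within +/- random_constraint.
        return random.uniform(-random_constraint, random_constraint)

    # With a constraint of 0.8, weights fall between -0.8 and 0.8.
    weight = randomize_weight(0.8)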
This function performs the process of feeding inputs and targets into the network and computing the feedforward process. After the feedforward process runs, the actual values calculated at the output are compared to the target values. These errors are then used by the back propagation process to adjust the weights for the next set of inputs. If a recurrent network structure is used, the stack of copy levels is pushed with the latest set of hidden nodes.
Then, the next set of inputs is fed in.
When all of the inputs have been processed, completing an epoch, the MSE will be printed if show_epoch_results=True.
Finally, if random_testing=True, the inputs will not be processed sequentially. Rather, they will be shuffled into a random order and then fed in. This is very useful with timeseries data to avoid training on autocorrelations.
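Putting the epoch-level behavior together, a hypothetical outline of the loop; process_sample and calc_mse are assumed method names, not necessarily the module's actual API.

    import random

    def run_learning(net, inputs, targets, epochs,
                     random_testing=True, show_epoch_results=True):
        # Hypothetical outline of the learning run described above.
        for epoch in range(epochs):
            positions = list(range(len(inputs)))
            if random_testing:
                random.shuffle(positions)   # avoid serial autocorrelation
            for pos in positions:
                # feedforward, compare to targets, back propagate
                net.process_sample(inputs[pos], targets[pos], learn=True)
            if show_epoch_results:
                print(epoch, net.calc_mse())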
This function loads and feedforwards the network with validation data. Optionally, it can also store the actuals.
This function loads and feedforwards the network with test data. Optionally, it can also store the actuals.
This function loads and feedforwards the network with data. eval_type selects either test ('t') or validation ('v') data. Optionally, it can also store the actuals.
This function calculates mean squared errors.
Accepts inputs and targets, then performs forward and back propagation. A comparison is then made of the generated output with the target values.
Note that this is an incremental learning step, not a pass over the full set of inputs and examples.
This function starts with the first hidden layer and gathers the values from the lower layer, applies the connection weightings to those values, and activates the nodes. Then the next layer up is selected and the process is repeated, resulting in output values in the uppermost layer.
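A sketch of the layer-by-layer activation; a sigmoid is used here as a stand-in for whatever activation each node actually carries.

    import math

    def feed_forward(layers, weights):
        # layers[0] holds the input values; weights[i][j][k] connects
        # node k of layer i to node j of layer i + 1.
        for i in range(1, len(layers)):
            for j in range(len(layers[i])):
                total = sum(w * v for w, v in
                            zip(weights[i - 1][j], layers[i - 1]))
                layers[i][j] = 1.0 / (1.0 + math.exp(-total))  # sigmoid
        return layers[-1]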
Backpropagate the error through the network. Aside from the initial computation of error at the output layer, the process takes the top hidden layer, looks at the output connections reaching up to the next layer, and carries the results down through each layer back to the input layer.
This function goes through layers starting with the top hidden layer and working its way down to the input layer.
At each layer, the node errors are updated from the errors and weights of the connections to nodes in the upper layer.
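As a sketch, the error update at one layer; a sigmoid derivative, value * (1 - value), is assumed for the lower nodes.

    def propagate_errors(lower_values, lower_errors, upper_errors, weights):
        # weights[j][k] connects lower node k to upper node j.
        for k in range(len(lower_errors)):
            downstream = sum(upper_errors[j] * weights[j][k]
                             for j in range(len(upper_errors)))
            lower_errors[k] = (downstream *
                               lower_values[k] * (1.0 - lower_values[k]))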
This function sets the node errors to zero in preparation for back propagation.
This function goes through layers starting with the top hidden layer and working its way down to the input layer.
At each layer, the weights are adjusted based upon the errors.
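The per-connection adjustment might be sketched as follows, assuming the error convention is target minus actual:

    def adjust_weights(weights, errors, lower_values, learn_rate):
        # Nudge each weight in proportion to the learn rate, the upper
        # node's error, and the lower node's value feeding the connection.
        for j in range(len(errors)):
            for k in range(len(lower_values)):
                weights[j][k] += learn_rate * errors[j] * lower_values[k]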
The mean squared error (MSE) is a measure of how well the outputs compare to the target values.
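For reference, a minimal MSE computation over accumulated output/target pairs:

    def calc_mse(actuals, targets):
        # Mean of the squared differences between outputs and targets.
        return (sum((a - t) ** 2 for a, t in zip(actuals, targets)) /
                len(actuals))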
This function advances the copy node values, transferring the values from the source node to the copy node. In order to avoid stomping on values that are yet to be copied, it goes from the highest node number to the lowest.
No provision is made at this point to exhaustively check precedence.
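A sketch of the reverse-order transfer; the mapping structure here is hypothetical. Copying from the highest node number down means a source value in a chain of copies is never overwritten before it is read.

    def advance_copies(nodes, copy_sources):
        # copy_sources maps copy-node index -> source-node index.
        for copy_idx in sorted(copy_sources, reverse=True):
            nodes[copy_idx] = nodes[copy_sources[copy_idx]]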
This function loads a layer and nodes from the input file. Note that it does not load the connections for those nodes here, waiting until all the nodes are fully instantiated. This is because the connection objects have nodes as part of the object.
This function receives a node id, parses it, and returns the node in the network to which it pertains. It implies that the network structure must already be in place for it to be functional.
This function instantiates a connection based upon the string loaded from the input file. Ex. node-1:0, 0.166366874487
This function instantiates a source node.
This function parses the node_id received from the input file. The format of a node id is: 'node-%s:%s' % (layer_no, node_no)
It returns layer_no and node_no.
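Given that format, the parsing might look like:

    def parse_node_id(node_id):
        # Parse 'node-%s:%s' % (layer_no, node_no) back into integers.
        layer_part, node_part = node_id.split(':')
        layer_no = int(layer_part.replace('node-', ''))
        return layer_no, int(node_part)

    parse_node_id('node-1:0')    # -> (1, 0)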
This function loads a file that has been saved by the save function. It is designed to be used when implementing a run-time version of the neural network.
This function outputs the values of the network. It is meant to be sufficiently complete that, once saved to a file, the network could be loaded back from that file and function fully.
To accommodate configparser, which is used as the file format, entries take the form of a [category] heading followed by label = value pairs.
Since sub-categories are not possible, the naming structure is designed to take that into account. This accommodation also leads to a couple of design choices: each layer is given a separate category followed by a list of its nodes, and then each node has a separate category identifying it by layer number and node number. This cannot be inferred from just knowing the number of nodes in the layer and reading sequentially, because if a sparse network is used, the node numbers may be out of sync with the position of the node within the layer.
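Illustratively, a saved file might contain entries shaped like the following. Only the [category] and label = value form comes from the description above; the specific categories and labels are assumptions.

    [layer-1]
    nodes = node-1:0, node-1:1

    [node-1:0]
    node_type = hidden
    activation_type = sigmoid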
This function receives a node and returns a text-based id that uniquely identifies it within the network.
This function saves the network structure to a file.