A simple feed-forward neural network that uses back-propagation for training.
$ pip install --target=. plotly
$ python main.py <configuration_file:string> <training_data_file:string> <training_iterations:integer> <show_visualization:boolean> <start_pos:integer> <end_pos:integer> <steps:integer>
configuration_file
: the configuration file to represent the network

training_data_file
: the training data file to train the network

training_iterations
: the maximum number of training iterations

show_visualization
: a boolean indicating whether to generate a visualization of the trained network

start_pos
: (required if show_visualization is true) the start position of the x, y coordinates

end_pos
: (required if show_visualization is true) the end position of the x, y coordinates

steps
: (required if show_visualization is true) the number of steps between each point
Examples:
$ python main.py configs/single-layer.json training-data/xor.json 100 false
$ python main.py configs/single-layer.json training-data/xor.json 4000 true -1 2 40
The /configs directory contains network configurations. Each object contains the
data to dynamically generate a network. The number_of_inputs corresponds to
the number of inputs the network should accept. The config array describes
the entire network: each element is a layer, and each element's value is the
number of perceptrons in that layer. The last element should always be 1, as
the network assumes there is a single output perceptron. The initial_weights
corresponds to the initial weights for all of the connections. It's a 3-dimensional
array. The first dimension is the entire network, and each element is a single layer,
similar to config. The second dimension is a layer, and each element is
a single perceptron in that layer. The third dimension is an individual perceptron,
and each element corresponds to a connection. The number of weights should be
length(previous_layer) + 1, because each perceptron needs a connection to every
perceptron in the previous layer plus a bias perceptron. The last weight is for the
bias perceptron. Order matters: the first element corresponds to the first connection,
i.e. the first perceptron in the previous layer, and so on. The first hidden layer's
previous layer is the inputs, so each of its perceptrons takes number_of_inputs + 1
weights.
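For illustration, a configuration in this shape might look like the following sketch. The field names come from the description above; the specific weight values are made up. This would describe the 2-input, one-hidden-layer network used in the XOR example later in this document (each hidden perceptron has 2 + 1 weights, the output perceptron has 4 + 1):

```json
{
  "number_of_inputs": 2,
  "config": [4, 1],
  "initial_weights": [
    [
      [0.1, -0.2, 0.05],
      [0.3, 0.1, -0.1],
      [-0.25, 0.2, 0.15],
      [0.05, -0.3, 0.1]
    ],
    [
      [0.2, -0.1, 0.3, 0.1, -0.05]
    ]
  ]
}
```

A hypothetical helper (not part of this repo) can make the shape rule concrete; it verifies that every perceptron carries len(previous_layer) + 1 weights, with the bias weight stored last:

```python
import json

def check_config(path):
    """Sketch of a validator for the weight-shape rule described above."""
    with open(path) as f:
        cfg = json.load(f)
    previous = cfg["number_of_inputs"]
    for size, layer in zip(cfg["config"], cfg["initial_weights"]):
        # One weight list per perceptron in this layer.
        assert len(layer) == size
        for weights in layer:
            # One weight per previous-layer perceptron, plus the bias.
            assert len(weights) == previous + 1
        previous = size

check_config("configs/single-layer.json")
```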
The /training-data directory contains examples of training data. Each object
contains a data key that corresponds to an array of examples. Each example
contains the inputs for each input perceptron and the expected_output from
the output perceptron. The network assumes there is only one output perceptron.
Training Data:

and.json
: the AND function (&&)

or.json
: the OR function (||)

xor.json
: the XOR function

nxor.json
: the NXOR function

x2.json
: the x^2 function
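As a sketch of this format, xor.json could contain something like the following. The key names data, inputs, and expected_output come from the description above; the exact contents are an assumption based on the standard XOR truth table:

```json
{
  "data": [
    { "inputs": [0, 0], "expected_output": 0 },
    { "inputs": [0, 1], "expected_output": 1 },
    { "inputs": [1, 0], "expected_output": 1 },
    { "inputs": [1, 1], "expected_output": 0 }
  ]
}
```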
Executing $ python main.py configs/single-layer.json training-data/xor.json 4000 true -1 2 40
produces something like the following. It creates a network with
2 input nodes, a hidden layer with 4 perceptrons, and a single output perceptron.
Provided with the 4 training examples for the XOR function, a maximum of 4000
training iterations, and graphing from (-1, -1) to (2, 2) with 40 steps per
unit, it produces the following visualization.
Executing $ python main.py configs/multi-layer.json training-data/x2.json 1000 true -1 1 50
produces something like the following. It creates a network with 2
input nodes, a hidden layer with 4 perceptrons, a second hidden layer with 4
perceptrons, and a single output perceptron.
Provided with the training examples for the x^2 function, a maximum of 1000
training iterations, and graphing from (-1, -1) to (1, 1) with 50 steps per
unit, it produces the following visualization.