
Multilayer Perceptron

A multilayer perceptron (MLP) is a class of feedforward artificial neural network. An MLP consists of at least three layers of nodes. Except for the input nodes, each node is a neuron that uses a nonlinear activation function. MLP utilizes a supervised learning technique called backpropagation for training. Its multiple layers and non-linear activation distinguish MLP from a linear perceptron. ~Wikipedia
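The nonlinear activation mentioned above is what lets the hidden layers model non-linear relationships, and its derivative is what backpropagation uses to compute weight updates. As a minimal sketch only (the actual activation function used by this project is not stated in this README, so the sigmoid choice here is an assumption), a neuron activation in Java could look like:

    // Sketch (assumption): a sigmoid activation, one common choice
    // of nonlinear activation for MLP neurons.
    public final class Sigmoid {

        // Maps any real input into the range (0, 1).
        public static double activate(double x) {
            return 1.0 / (1.0 + Math.exp(-x));
        }

        // Derivative expressed in terms of the activated value,
        // as used by backpropagation when computing gradients.
        public static double derivative(double activated) {
            return activated * (1.0 - activated);
        }
    }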

Configuration (Configurator class)

Configurator constructor:

    Configurator cfg = new Configurator(int input_count, int[] layers);

Layers structure (example):

    int[] layers = new int[] {
        3, // second layer (hidden) -> 3 neurons
        4, // third layer (hidden) -> 4 neurons
        2  // fourth layer (output) -> 2 neurons
    };

The last element of the array is the output layer.

1st layer (input) - input_count neurons
2nd layer (hidden) - layers[0]
3rd layer (hidden) - layers[1]
...
nth layer (output) - layers[layers.length - 1]
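
Putting the constructor and the layers structure together, a minimal configuration sketch for the 3-4-2 example above might look like the following (the input count of 2 is an assumed, illustrative value):

    // Sketch: configuring a 2-3-4-2 network (2 inputs is an example value).
    int inputCount = 2;
    int[] layers = new int[] {
        3, // first hidden layer
        4, // second hidden layer
        2  // output layer
    };
    Configurator cfg = new Configurator(inputCount, layers);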

Options:

  • setInput(double[])
  • setExpected(double[])
  • setRange(double, double) default: (-0.5, 0.5)
  • setLearningFactor(double) default: 0.8
  • setMomentum(double) default: 0.2
  • setBias(boolean) default: true
  • setInputRotation(boolean) default: true
  • setEpochs default: 1000
  • setErrorLogStep default: 10
  • setError default: 0.01
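
A hedged usage sketch of these options, applied to the cfg instance from the example above, is shown below. All values are illustrative, and the argument types for setEpochs, setErrorLogStep, and setError are assumptions (int, int, double), since the README does not list them:

    // Sketch: applying options to the Configurator (values are illustrative).
    cfg.setInput(new double[] { 0.1, 0.9 });    // one input pattern (2 inputs)
    cfg.setExpected(new double[] { 1.0, 0.0 }); // expected output (2 output neurons)
    cfg.setRange(-0.5, 0.5);                    // initial weight range
    cfg.setLearningFactor(0.8);
    cfg.setMomentum(0.2);
    cfg.setBias(true);
    cfg.setInputRotation(true);
    cfg.setEpochs(1000);                        // argument type assumed: int
    cfg.setErrorLogStep(10);                    // argument type assumed: int
    cfg.setError(0.01);                         // target error; argument type assumed: double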

About

Lodz University of Technology Project (Intelligent Data Analysis)
