Implement batchnorm layer #155

Open
@milancurcic

Description

Originally requested by @rweed in #114.

A batch normalization layer is possibly the next most widely used layer after dense, convolutional, and maxpooling layers, and is an important tool for optimization (it accelerates training).

For neural-fortran, this means we will need to allow passing a batch of data to individual layers' forward and backward methods. For dense and conv2d layers this is also an opportunity to numerically optimize the operations (e.g. running the same operation on a whole batch instead of on one sample at a time); for a batchnorm layer it is a requirement, because this layer evaluates moments (e.g. means and standard deviations) over a batch of inputs to normalize the input data.
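
For reference, batchnorm normalizes each feature as y = gamma * (x - mean) / sqrt(variance + eps) + beta, with the mean and variance taken over the batch. Below is a minimal sketch of what the forward pass could look like for 1-d inputs shaped (features, batch_size); the names (batchnorm_forward, gamma, beta, eps) are illustrative and not part of the current neural-fortran API:

pure function batchnorm_forward(x, gamma, beta, eps) result(y)
  ! Sketch of a batch normalization forward pass (not the neural-fortran API).
  ! x(i, j) is feature i of sample j; moments are taken over the batch (dim 2).
  real, intent(in) :: x(:,:)            ! input batch, shape (features, batch_size)
  real, intent(in) :: gamma(:), beta(:) ! learnable scale and shift, one per feature
  real, intent(in) :: eps               ! small constant for numerical stability
  real :: y(size(x, 1), size(x, 2))
  real :: mu(size(x, 1)), var(size(x, 1))
  integer :: n
  n = size(x, 2)
  ! Per-feature mean and (biased) variance over the batch
  mu = sum(x, dim=2) / n
  var = sum((x - spread(mu, dim=2, ncopies=n))**2, dim=2) / n
  ! Normalize, then scale and shift: y = gamma * (x - mu) / sqrt(var + eps) + beta
  y = spread(gamma, dim=2, ncopies=n) &
    * (x - spread(mu, dim=2, ncopies=n)) / sqrt(spread(var, dim=2, ncopies=n) + eps) &
    + spread(beta, dim=2, ncopies=n)
end function batchnorm_forward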

Implementing batchnorm will require another non-trivial refactor, like the one we did to enable generic optimizers; however, it will probably be easier. The first step will be to allow passing a batch of data to the forward and backward methods, as mentioned above. In other words, this snippet:

do concurrent(j = istart:iend)
  call self % forward(input_data(:,j))
  call self % backward(output_data(:,j))
end do

should, after the refactor, be writable like this:

call self % forward(input_data(:,:))
call self % backward(output_data(:,:))

where the first dim corresponds to the inputs and outputs of the input and output layers, respectively, and the second dim corresponds to multiple samples in a batch. I will open a separate issue for this.
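
As a side benefit of the batched API, the dense layer's forward pass could be expressed as a single matmul over the whole batch instead of a loop over samples. A rough sketch, assuming weights shaped (outputs, inputs) and hypothetical names that do not necessarily match the existing dense layer internals:

pure function dense_forward_batch(weights, biases, x) result(a)
  ! Sketch of a batched dense-layer forward pass: one matmul over the whole
  ! batch rather than one matrix-vector product per sample.
  real, intent(in) :: weights(:,:) ! shape (outputs, inputs); layout is an assumption
  real, intent(in) :: biases(:)    ! shape (outputs)
  real, intent(in) :: x(:,:)       ! shape (inputs, batch_size)
  real :: a(size(weights, 1), size(x, 2))
  a = matmul(weights, x) + spread(biases, dim=2, ncopies=size(x, 2))
  ! The layer's activation function would then be applied elementwise to a.
end function dense_forward_batch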

@Spnetic-5 given the limited time remaining in the GSoC program, we may not be able to complete the batchnorm implementation, but we can certainly make significant headway on it.
