Experiment with Neural Network architectures to build and evaluate both single and multi-layer sequential models in Keras
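As a quick illustration of the single- versus multi-layer comparison, here is a hedged sketch (not taken from the repository; the input size and class count are placeholders) that builds one Keras Sequential model with a single Dense layer and one with two hidden layers, compiled the same way for classification.

```python
# Minimal sketch: single-layer vs. multi-layer Sequential models in Keras.
# The 784-dimensional input and 10 classes are assumed placeholder values.
from tensorflow import keras
from tensorflow.keras import layers

# Single-layer model: one Dense layer mapping inputs directly to class scores.
single = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(10, activation="softmax"),
])

# Multi-layer model: hidden layers add capacity at the cost of more parameters.
multi = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

for model in (single, multi):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=5, validation_split=0.1)
```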
Various tasks related to graph and network theory
Identifying image orientation using supervised machine learning models: k-Nearest Neighbors, AdaBoost, and a multi-layer feed-forward neural network trained with the back-propagation learning algorithm
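For context, the three-way comparison can be sketched with scikit-learn stand-ins; the repository's own implementations may differ, and the synthetic data and hyperparameters below are placeholders for the image-orientation features, not values from the project.

```python
# Illustrative sketch only: compare k-NN, AdaBoost, and an MLP trained by
# back-propagation on placeholder data standing in for orientation features.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for image feature vectors with four orientation classes.
X, y = make_classification(n_samples=2000, n_features=192, n_classes=4,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=11),
    "AdaBoost": AdaBoostClassifier(n_estimators=200),
    "MLP (back-prop)": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    print(name, "test accuracy:", clf.score(X_test, y_test))
```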
Code used for the paper "Node and layer eigenvector centralities for multiplex networks", Tudisco, Arrigo, Gautier, SIAM J. Appl. Math 2017
Notes, tutorials, code snippets and templates focused on NNs for Machine Learning
The main goal of this work is to build and train multilayer NNs, train autoencoders to reduce the number of features for the classifiers, and build and train deep networks (CNN and LSTM) for predicting or detecting seizures.
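The autoencoder-as-feature-reducer step can be sketched in Keras as follows; the layer sizes, loss, and binary seizure label below are assumptions for illustration, not the values used in this work.

```python
# Hedged sketch: train an autoencoder, then reuse its encoder to compress
# features for a downstream classifier. Dimensions are placeholders.
from tensorflow import keras
from tensorflow.keras import layers

n_features, n_latent = 512, 32          # assumed original / reduced sizes

inputs = keras.Input(shape=(n_features,))
encoded = layers.Dense(n_latent, activation="relu")(inputs)
decoded = layers.Dense(n_features, activation="linear")(encoded)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)   # reused as the feature reducer
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(X, X, epochs=20)       # trained to reconstruct its input

# The compressed features then feed a downstream classifier, e.g. a small MLP
# with a binary output standing in for seizure / no-seizure detection.
classifier = keras.Sequential([
    keras.Input(shape=(n_latent,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
classifier.compile(optimizer="adam", loss="binary_crossentropy")
# classifier.fit(encoder.predict(X), y, epochs=20)
```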
Implementation of the papers Rádli, R., & Czúni, L., "Deep Randomized Networks for Fast Learning" (2023) and "Iteratively increasing randomized networks" (2024)
Lightweight multi-layer neural network library implemented in pure JavaScript.
Example output from my multiplex visualisation tool
Code used for the paper "Multi-dimensional HITS: An always computable ranking for temporal multi-layer directed networks", Arrigo, Tudisco, 2018
Classification is performed in 2-dimensional space using artificial neural network learning rules; the Perceptron and Delta learning rules are implemented in different layers.
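A minimal NumPy sketch of those two rules on toy 2-D data (not the repository's code) shows the difference: the perceptron rule updates from the thresholded output, while the delta (Widrow-Hoff) rule updates from the raw linear error.

```python
# Toy comparison of the perceptron and delta learning rules on 2-D points.
# The data, learning rates, and epoch counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)   # linearly separable targets
Xb = np.hstack([X, np.ones((200, 1))])           # append a bias input

def perceptron_rule(Xb, y, lr=0.1, epochs=50):
    """Update weights only when the hard-thresholded output is wrong."""
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1.0 if xi @ w > 0 else -1.0   # hard-limiter output
            w += lr * (yi - pred) * xi           # zero update when correct
    return w

def delta_rule(Xb, y, lr=0.01, epochs=50):
    """Widrow-Hoff / LMS update on the raw linear error."""
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            w += lr * (yi - xi @ w) * xi
    return w

for rule in (perceptron_rule, delta_rule):
    w = rule(Xb, y)
    accuracy = np.mean(np.where(Xb @ w > 0, 1.0, -1.0) == y)
    print(rule.__name__, "accuracy:", accuracy)
```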