
CNN example #27

Open

JNaranjo-Alcazar opened this issue Jan 5, 2023 · 3 comments

Comments

@JNaranjo-Alcazar

Hi community,
Excellent work! I am curious whether this activation layer can be applied in a CNN. Is there an example similar to https://www.nengo.ai/keras-spiking/examples/spiking-fashion-mnist.html but using CNNs?

Following that example, the input to the network would have dimensionality (T, 28, 28, 1). How would the network have to be modified?

Best regards

@drasmuss
Member

drasmuss commented Jan 5, 2023

Yes, the spiking activation layers can be used with any other layer types. In the example you linked, you could replace the Dense layers with Conv2D layers (and remove the Flatten/Reshape layers, as they are no longer needed). Or you could take any Keras CNN example and use SpikingActivation layers in place of the standard activations.

@JNaranjo-Alcazar
Author

JNaranjo-Alcazar commented Jan 5, 2023

Okay, thanks for the quick answer.
My question is the following: I have implemented a CNN with Conv2D layers wrapped in TimeDistributed, followed by pooling layers also wrapped in TimeDistributed. As you can see, my image is 64 x 50 and it is repeated 10 times, as explained in the MNIST tutorial. How do I do the last pooling before the classification layer? I have solved it with a Flatten layer, but I don't think that is the right approach. Any suggestions?

Find the architecture below:

import tensorflow as tf
import keras_spiking
from tensorflow.keras.layers import (
    Conv2D, Dense, Flatten, MaxPool2D, TimeDistributed,
)

model = tf.keras.Sequential(
    [
        # input_shape belongs on the TimeDistributed wrapper,
        # not on the inner Conv2D
        TimeDistributed(Conv2D(32, 3, padding='same'),
                        input_shape=(10, 64, 50, 1)),
        keras_spiking.SpikingActivation("relu", spiking_aware_training=False),
        TimeDistributed(MaxPool2D(pool_size=2)),

        TimeDistributed(Conv2D(64, 3, padding='same')),
        keras_spiking.SpikingActivation("relu", spiking_aware_training=False),
        TimeDistributed(MaxPool2D(pool_size=2)),

        TimeDistributed(Conv2D(128, 3, padding='same')),
        keras_spiking.SpikingActivation("relu", spiking_aware_training=False),
        TimeDistributed(MaxPool2D(pool_size=2)),

        TimeDistributed(Conv2D(256, 3, padding='same')),
        keras_spiking.SpikingActivation("relu", spiking_aware_training=False),
        TimeDistributed(MaxPool2D(pool_size=2)),

        Flatten(),
        Dense(10),
    ]
)

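One possible answer to the pooling question (a sketch under my own assumptions, not an official recommendation): wrap Flatten in TimeDistributed so that only the spatial dimensions are collapsed, then average over the time axis with GlobalAveragePooling1D before (or after) the classification layer, similar to the temporal averaging in the linked fashion-MNIST example. Plain "relu" stands in for keras_spiking.SpikingActivation here so the snippet runs without keras_spiking, and a single conv block stands in for the four above.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input((10, 64, 50, 1)),
    # One conv block as a stand-in for the four in the architecture above;
    # plain "relu" as a stand-in for keras_spiking.SpikingActivation.
    layers.TimeDistributed(layers.Conv2D(32, 3, padding="same",
                                         activation="relu")),
    layers.TimeDistributed(layers.MaxPool2D(pool_size=2)),
    # Flatten only the spatial dimensions, keeping the time axis intact.
    layers.TimeDistributed(layers.Flatten()),
    layers.Dense(10),                 # per-timestep logits: (batch, 10, 10)
    layers.GlobalAveragePooling1D(),  # average the logits over the time axis
])
# final output shape: (batch, 10 classes), with the time axis averaged away
```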
@JNaranjo-Alcazar
Author

Hi, I managed to train the CNN described in the previous comment.
However, when running check_output I had to change this line:

if has_global_average_pooling:
    # check test accuracy using average output over all timesteps
    predictions = np.argmax(output.mean(axis=1), axis=-1)
else:
    # check test accuracy using output from only the last timestep
    # predictions = np.argmax(output[:, -1], axis=-1)  # ORIGINAL
    predictions = np.argmax(output, axis=-1)  # MY LINE

output had a shape of (N_SAMPLES, 10), so I had to make this modification to get the label information.
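A likely explanation (my assumption, not confirmed in the thread): the plain Flatten() in the model above collapses the time axis together with the spatial ones, so the output has no timestep dimension left for [:, -1] to index. A small numpy illustration of the two cases:

```python
import numpy as np

# Case 1: the output keeps its time axis, shape (N_SAMPLES, T, n_classes).
out_t = np.arange(24, dtype=float).reshape(2, 4, 3)
pred_t = np.argmax(out_t[:, -1], axis=-1)  # select the last timestep first

# Case 2: the output has no time axis, shape (N_SAMPLES, n_classes),
# e.g. after a plain Flatten() that also merged the timesteps.
out_flat = np.array([[0.1, 0.7, 0.2],
                     [0.9, 0.05, 0.05]])
pred_flat = np.argmax(out_flat, axis=-1)  # one argmax per sample -> [1, 0]
```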

I do not understand why I had to make this change. Is the shape of output correct?

Does the modification make sense?
Thanks for the support
