Outputs of network differ between running on AI-deck and as quantized network in tflite #391
Description
Hi, I have encountered a problem while working with nntool for GAP8 (the AI-deck, to be more specific). Please see the details below.
The problem:
Outputs from the network are very different between running it in TensorFlow (tflite after quantization) and on the actual AI-deck. My neural network takes two inputs: a stack of three grey-scale images and a stack of three state arrays [velocities, quaternions]. Despite making the inputs exactly the same (0s for input_1 and 180s for input_2) for both the AI-deck and tflite, the outputs are very different (as shown below):
Tflite:
[[-28 28 123]]
AI-deck:
171 16 61
I could understand a small discrepancy between the two, but here the results are completely different. I am using GAP SDK v4.12.0.
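For reference, both sets of values are quantized, so the underlying real values follow the standard tflite affine formula real = scale * (q - zero_point). Below is a minimal sketch of how the two outputs can be compared in real units (the scale and zero_point are placeholders; the real ones come from the interpreter's output details):

import numpy as np

# Placeholder quantization parameters; the real ones come from
# output_details[0]['quantization'] in the tflite interpreter.
scale, zero_point = 0.1, 128

tflite_q = np.array([-28, 28, 123], dtype=np.int32)   # tflite output
aideck_q = np.array([171, 16, 61], dtype=np.int32)    # AI-deck output

# Standard tflite affine dequantization: real = scale * (q - zero_point)
print('tflite dequantized :', scale * (tflite_q - zero_point))
print('AI-deck dequantized:', scale * (aideck_q - zero_point))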
What I tried
- The network uses a transpose in the middle and I tried playing with it (removing it, etc.) to see if I could make the outputs match - no luck
- I tried disabling and re-enabling the ‘adjust’ option for nntool (while taking care to keep the input shape appropriate; see the shape/type check after this list) - no luck
- I tried adding a softmax at the end of my network - no luck
- I tried defining the outputs as different types - no luck
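The shape/type check mentioned above is roughly the following (a minimal sketch using the same test_model.tflite as in the script further down); it just dumps what the quantized model expects and produces, so it can be compared with the buffers used on the AI-deck side:

import tensorflow as tf

# Dump what the quantized model expects/produces, to compare with the
# shapes, dtypes and quantization used on the AI-deck side.
interpreter = tf.lite.Interpreter(model_path='test_model.tflite')
interpreter.allocate_tensors()

for detail in interpreter.get_input_details() + interpreter.get_output_details():
    print(detail['name'], detail['shape'], detail['dtype'], detail['quantization'])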
I have attached a link (below) to a minimal example for the AI-deck, with instructions on how to run it should you wish to check it (minimal_network_code.c is the main entry point), as well as a minimal working example for running the tflite network in Python. I have also included my network as a quantized tflite file (inside the GAP8 network test repository, in model/test_model.tflite).
Python script to check the tflite output:
import tensorflow as tf
import numpy as np

model_path = 'test_model.tflite'

# Load the quantized model and allocate its tensors
interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Same constant inputs as on the AI-deck: all 0s for the image stack,
# all 180s for the state stack
input_data_1 = (np.zeros([1, 84, 84, 3])).astype('uint8')
input_data_2 = (np.zeros([1, 3, 7]) + 180).astype('uint8')
interpreter.set_tensor(input_details[0]['index'], input_data_1)
interpreter.set_tensor(input_details[1]['index'], input_data_2)

# Run inference and print the (still quantized) output
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print('quantized output')
print(output_data)
AI-deck side code:
https://github.com/pladosz/GAP8_network_test/
I suspect it is some sort of problem with the output being interpreted as the wrong type, but I am not sure.
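One quick way to check for a plain signed/unsigned mix-up is to reinterpret the same bytes as the other type (a minimal sketch, using the two outputs reported above):

import numpy as np

# If the AI-deck were simply printing the tflite int8 output as unsigned
# bytes (or vice versa), reinterpreting the raw bytes should reproduce it.
tflite_out = np.array([-28, 28, 123], dtype=np.int8)
aideck_out = np.array([171, 16, 61], dtype=np.uint8)

print('tflite bytes read as uint8:', tflite_out.view(np.uint8))
print('AI-deck bytes read as int8:', aideck_out.view(np.int8))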
Many Thanks,