Question about the parameters #3
Hello, can I ask why you set batch_size=7919, and whether z_size means the latent space or not? Did you use the down-sampling rate mentioned in the USAD article? Thanks for your answer.
Hello! The unusual batch size value was chosen only to speed up training; I left it like that, but you are free to change it, and it will only affect the training time. The z_size is actually the product of the window size (12) and the latent size (10), and it does correspond to the dimension of the latent space. I did not implement the down-sampling experiments, sorry.
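For readers following along, here is a minimal sketch of how these quantities relate; the variable names (e.g. `n_features`) are assumptions for illustration, not necessarily the repository's exact identifiers:

```python
# Hypothetical illustration of the size relationships described above.
window_size = 12      # time steps per sliding window
latent_size = 10      # per-time-step latent dimension
n_features = 51       # e.g. number of SWaT sensors/actuators (assumption)

w_size = window_size * n_features    # flattened input dimension of one window
z_size = window_size * latent_size   # latent space dimension: 12 * 10 = 120

BATCH_SIZE = 7919     # only affects training speed, not the learned model
```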
OK, thanks for your help.
Hello, I have another question. I notice that you use windows_normal_test + windows_attack as the test_loader, but the SWaT dataset has a Normal/Attack label that you drop, and not all of the windows_attack data is actually anomalous. Thanks for your answer.
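For context, a minimal sketch of the kind of sliding-window construction being discussed; `create_windows`, `values`, and `window_size` are illustrative names, not the repository's exact code:

```python
import numpy as np

def create_windows(values, window_size):
    # Stack overlapping windows of shape (n_windows, window_size, n_features)
    # from a (time, features) array.
    return np.stack([values[i:i + window_size]
                     for i in range(len(values) - window_size + 1)])

# windows_attack would then contain every window that overlaps the attack
# period, including windows made up mostly of normal points.
```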
Hello again :)
Hi, I have just read the paper and studied your code, and that is how I noticed this question. I have not tried to fix it yet. I'm looking forward to your correction, and I will try it now.
I also noticed this issue and tried to fix it, but the performance of the model is very poor. I am confused about it.
My implementation is here
Hi @severous, what do you mean by poor performance?
I have the same problem: why are all attack windows treated as abnormal?
This problem has been solved in this implementation. Thanks.
Hello, thanks for your reply! I used the new code to do anomaly detection, but I get bad results in F1 score. Its performance is not as good as in the original paper.
Hi,
I have solved this problem! The original paper selected the best F1 score; I used the same approach and achieved a similar result on SWaT. Thanks!
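A minimal sketch of what selecting the best F1 score over thresholds could look like, assuming `y_test` holds per-window 0/1 labels and `y_pred` holds per-window anomaly scores (both names are assumptions):

```python
import numpy as np
from sklearn.metrics import f1_score

def best_f1_threshold(y_test, y_pred, n_candidates=200):
    # Sweep candidate thresholds over the score range and keep the one
    # that maximizes F1 on the labelled test windows.
    thresholds = np.linspace(y_pred.min(), y_pred.max(), n_candidates)
    f1s = [f1_score(y_test, (y_pred >= t).astype(int)) for t in thresholds]
    best = int(np.argmax(f1s))
    return thresholds[best], f1s[best]

# threshold, best_f1 = best_f1_threshold(y_test, y_pred)
```

Note that reporting the F1 at the best threshold is an optimistic evaluation choice; it matches what the comment above describes rather than a deployment setting.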
@meihuameii Hello, can you explain how to achieve a similar result on SWaT?
Even when I select the best F1 score, it is about 74%, which is still far below the value in the paper.
Hi,
Then I increased the threshold to 10:

```python
threshold = 10
y_pred_ = np.zeros(y_pred.shape[0])   # y_pred: per-window anomaly scores
y_pred_[y_pred >= threshold] = 1      # flag windows whose score reaches the threshold
```

Classification report:
Hi,
And did you mean to use that? In my case, I got
@soemthlng Yes, I increased the threshold.
How did you select the threshold?
@finloop This is my code.
I have 2 questions.
You can check out the whole USAD.ipynb file. I think I found the issue: this line in your code could cause the divergence in scores:

```python
y_test = np.concatenate([np.zeros(windows_normal_test.shape[0]),
                         np.ones(windows_attack.shape[0])])
```

It creates a long array of zeros followed by ones, like [0, 0, 0, 0, ..., 1, 1, 1, 1], and it does not take into account that not all windows in windows_attack are actually anomalous. This is what mine looks like, and this is what I think yours amounts to:

```python
import numpy as np
import sklearn.metrics
import matplotlib.pyplot as plt

# threshold = 10
y_test = np.concatenate([np.ones(windows_attack.shape[0])])  # labels every attack window as anomalous
plt.plot(y_test)
plt.ylim([0, 1.5])
print(sklearn.metrics.classification_report(y_test, y_pred_))
```
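A minimal sketch of how per-window labels could instead be derived from the point-wise Normal/Attack column, so that a window counts as anomalous only if it contains at least one anomalous point; `point_labels` and `window_size` are illustrative names:

```python
import numpy as np

def window_labels(point_labels, window_size):
    # point_labels: 1-D array of 0/1 point-wise labels for the test split.
    # A window is labelled 1 if any point inside it is anomalous.
    return np.array([
        1 if point_labels[i:i + window_size].any() else 0
        for i in range(len(point_labels) - window_size + 1)
    ])

# y_test = window_labels(test_point_labels, window_size)
```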
Could you show me your notebook?
I run this code on an Ubuntu 16.04 server, so I do not have a notebook. This is my code:
Does "did not modify the y_test" mean that you use this?
Yes :)
Sure. Here you go, link to notebook: https://github.com/finloop/usad/blob/dev/USAD.ipynb
@finloop this is
I think
Yes. It is what we want.
What do you mean by
I mean: previously, you said that you got results similar to those presented in the paper.
No. The F1, recall, etc. were based on class
Hello, have you reproduced the results from the original paper (the F1 result without point-adjust on the SWaT dataset is 0.7917)? I have been struggling to reproduce the results from the original paper. In my code, BATCH_SIZE = 1024, N_EPOCHS = 100, hidden_size = 20, window_size = 10. I used StandardScaler() or MinMaxScaler() for data preprocessing, and down-sampling with a rate of 5. The best result from these settings was only about 0.74. Do you have any tips for getting results similar to those of the original paper? I'm looking forward to your reply. Thank you very much.
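A minimal sketch of one common way to down-sample a multivariate series with rate 5, taking the median of every 5 consecutive samples; whether this matches the exact procedure used in the paper or the repository is an assumption:

```python
import numpy as np

def downsample(data, rate=5):
    # data: (time, features) array. Take the median of every `rate` consecutive
    # rows; trailing rows that do not fill a full block are dropped.
    n = (data.shape[0] // rate) * rate
    return np.median(data[:n].reshape(-1, rate, data.shape[1]), axis=1)

def downsample_labels(labels, rate=5):
    # For point-wise labels, taking the max over each block keeps a block
    # anomalous if any of its points is anomalous.
    n = (labels.shape[0] // rate) * rate
    return labels[:n].reshape(-1, rate).max(axis=1)
```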
Hello, Thanks!
Hello, sorry to bother you. May I ask why you set the threshold to 0? After I set it to 1.0, the accuracy value I get is 0.
A window is considered anomalous if it contains at least one anomaly. Check out my notebook: https://github.com/finloop/usad/blob/dev/USAD.ipynb
Thank you for your answer. Before, I had set the wrong data normalization (MinMaxScaler); after the change, the accuracy increased a lot.
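On the preprocessing point, a minimal sketch of the usual pattern of fitting the scaler on the training data only and reusing its statistics for the test data; `train_values` and `test_values` are illustrative names, and which scaler best reproduces the paper's numbers is not settled in this thread:

```python
from sklearn.preprocessing import MinMaxScaler  # or StandardScaler

scaler = MinMaxScaler()
train_scaled = scaler.fit_transform(train_values)  # fit on (normal) training data only
test_scaled = scaler.transform(test_values)        # reuse the same statistics for the test data
```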
Thanks. How do I get the best result on the WADI dataset?
I have the same questions. I will check out the new code uploaded by @finloop: https://github.com/finloop/usad/blob/dev/USAD.ipynb