
Invalid dists #9

Open
nrpr93 opened this issue Sep 4, 2017 · 15 comments

Comments

@nrpr93

nrpr93 commented Sep 4, 2017

Hi,

I get NaN dists for some reason. The script reads all the annotations and picks up all the information in them, but it doesn't give me the dists correctly.
Can someone help me with this?
nandists

Regards,
NR

@TheMikeyR

@nrpr93 I've experienced the same issue with NaN dists. I solved mine by reducing the number of clusters to 1 or 3 instead of 5.
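(A minimal sketch of one way those NaN dists can arise, assuming the usual IoU-based k-means used for anchor generation rather than the repository's gen_anchors.py itself: when more clusters are requested than there are distinct (w, h) shapes, some clusters end up empty, and the mean of an empty cluster is NaN, which then poisons every subsequent dist.)

# Sketch only: illustrates empty clusters -> NaN centroids in IoU-based k-means.
import numpy as np

def iou(boxes, centroid):
    # boxes: (N, 2) normalized (w, h); centroid: (2,). Boxes are compared as if
    # they shared a common center, the usual anchor-clustering convention.
    inter = np.minimum(boxes[:, 0], centroid[0]) * np.minimum(boxes[:, 1], centroid[1])
    union = boxes[:, 0] * boxes[:, 1] + centroid[0] * centroid[1] - inter
    return inter / union

boxes = np.tile([0.0625, 0.0833], (15, 1))                            # 15 identical shapes
centroids = np.array([[0.05, 0.05], [0.0625, 0.0833], [0.30, 0.30]])  # 3 clusters requested

dists = 1 - np.stack([iou(boxes, c) for c in centroids], axis=1)
assignment = dists.argmin(axis=1)          # every box picks the same centroid

# Two clusters receive no boxes; the mean over zero rows is NaN and spreads from here.
new_centroids = np.array([boxes[assignment == k].mean(axis=0) for k in range(3)])
print(new_centroids)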

@nrpr93
Author

nrpr93 commented Sep 6, 2017

Now it works with one cluster, but it only runs two iterations, and the final anchor doesn't seem quite right. I assume that's because of the number of iterations.

@TheMikeyR

Are you testing with the VOC data?

@nrpr93
Author

nrpr93 commented Sep 6, 2017

No, I'm testing with my own data.

@TheMikeyR

Maybe there is some issue with your own data. You could try running the VOC example in the readme and see whether you can reproduce the same results.

Could you tell me about your data and how it is formatted?

@nrpr93
Author

nrpr93 commented Sep 6, 2017

The format of my data is pretty simple. I have 15 classes, and I use the same type of annotation that YOLO wants. My 'width' and 'height' are always the same across all the images; only the 'x' and 'y' change.

0 0.53 0.35 0.0625 0.08333333333333333
1 0.53 0.490909090909 0.0625 0.08333333333333333
2 0.476666666667 0.486363636364 0.0625 0.08333333333333333
3 0.583333333333 0.490909090909 0.0625 0.08333333333333333
4 0.453333333333 0.604545454545 0.0625 0.08333333333333333
5 0.61 0.604545454545 0.0625 0.08333333333333333
6 0.446666666667 0.731818181818 0.0625 0.08333333333333333
7 0.613333333333 0.75 0.0625 0.08333333333333333
8 0.53 0.581818181818 0.0625 0.08333333333333333
9 0.493333333333 0.672727272727 0.0625 0.08333333333333333
10 0.563333333333 0.677272727273 0.0625 0.08333333333333333
11 0.48 0.877272727273 0.0625 0.08333333333333333
12 0.566666666667 0.877272727273 0.0625 0.08333333333333333
13 0.47 1.05 0.0625 0.08333333333333333
14 0.57 1.04545454545 0.0625 0.08333333333333333
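(Each line above is in the standard YOLO label format, class_id x_center y_center width height, all normalized to the image dimensions. A minimal sketch of pulling out just the (width, height) pairs that anchor clustering uses could look like the following; the directory name is only an example.)

# Sketch only: collect normalized (width, height) pairs from YOLO-format label files.
import glob

def load_shapes(label_dir):
    shapes = []
    for path in glob.glob(label_dir + "/*.txt"):
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) != 5:
                    continue  # skip empty or malformed lines
                _, _, _, w, h = parts  # class_id, x_center, y_center, width, height
                shapes.append((float(w), float(h)))
    return shapes

# e.g. shapes = load_shapes("labels")  # directory name is illustrative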

@TheMikeyR

How much data do you have for each class?

@TheMikeyR

Is that all the bounding boxes you have, @nrpr93? Because that is probably your issue. I've used ~60k frames with ~180k detections to find my anchor boxes, so I believe the program can't generate anchor boxes if you only have one detection per class. You need to collect a bigger dataset if that is the case.

@nrpr93
Author

nrpr93 commented Sep 6, 2017

Of course I don't have just one detection per class; I have 18k for each class. I just showed an example of a file with 15 detections.

@TheMikeyR

Okay, sorry, I don't know the issue then. Maybe @Jumabek can help?

@Jumabek
Owner

Jumabek commented Oct 23, 2017

Hi @nrpr93, could you upload your annotation file here (one would be sufficient)? I want to test your format (I suspect it has something to do with trailing whitespace or similar).

If you cannot upload it, then please email jumabek4044@gmail.com

@Jumabek
Owner

Jumabek commented Oct 23, 2017

Sorry, I just read that you guys are getting NaN. @TheMikeyR, @nrpr93, can you send me your annotations? No need for the images, just the annotations. I want to fix the code using your annotations, if you do not mind.
By the way, sorry I am replying late; I was on vacation.

@nrpr93
Author

nrpr93 commented Oct 23, 2017

I believe the problem is that the width and height are the same for all of my objects, so there is a division problem in the calculation.

Here's one example of my annotations:
0 0.493750 0.325000 0.09 0.9
1 0.490625 0.450000 0.09 0.9
2 0.440625 0.450000 0.09 0.9
3 0.540625 0.450000 0.09 0.9
4 0.425000 0.562500 0.09 0.9
5 0.562500 0.566667 0.09 0.9
6 0.418750 0.687500 0.09 0.9
7 0.559375 0.687500 0.09 0.9
8 0.493750 0.537500 0.09 0.9
9 0.465625 0.625000 0.09 0.9
10 0.525000 0.625000 0.09 0.9
11 0.453125 0.804167 0.09 0.9
12 0.528125 0.804167 0.09 0.9
13 0.440625 0.966667 0.09 0.9
14 0.534375 0.954167 0.09 0.9
1.txt

@TheMikeyR

@Jumabek don't worry, I believe my error was because the cluster count was too high relative to my annotations, which are all around 100x100 in size, so I couldn't go up to 5 clusters. I had no issues using the VOC data. If you still want my data, just give me a thumbs up and I will send it 😄

@Jumabek
Owner

Jumabek commented Nov 2, 2017

@nrpr93 now I see it.
It seems all of your objects have the same width and height (0.09, 0.9).
The purpose of generating anchors (a.k.a. clustering) is to group objects that are close to each other in terms of their shape.

Since in your case you have objects of the same shape but different classes, you should choose only one cluster.

If the code fails for some other reason, here I calculate the centroid for you:

input_height = 416; // height of your network input
input_width = 416;  // width of your network input; depending on your setup it could be different
downsampling_factor = 32;

// then you will have 1 cluster/anchor, which is
anchor_w = 0.09 * input_width / downsampling_factor
anchor_h = 0.9 * input_height / downsampling_factor
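(The same arithmetic as a quick sketch in Python, assuming the 416x416 input and stride-32 output grid above; with those numbers the single anchor comes out to about 1.17 x 11.70 grid cells.)

# Single-anchor computation, assuming a 416x416 network input and a stride-32 output grid.
input_width, input_height = 416, 416
downsampling_factor = 32

# Every box shares the normalized shape (w, h) = (0.09, 0.9),
# so the lone centroid is simply that shape scaled to grid cells.
anchor_w = 0.09 * input_width / downsampling_factor    # 1.17
anchor_h = 0.9 * input_height / downsampling_factor    # 11.70

print("anchors = %.2f,%.2f" % (anchor_w, anchor_h))    # prints: anchors = 1.17,11.70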
