As these are all relatively minor/easily solved issues, and to save myself some time, I've chosen to file this as a single issue - let me know if this should be broken out though.
There are a few things that make GroundBasedPeopleDetectionApp unusable in some circumstances, and just very hard to figure out how to use in others:
- No documentation on what format it expects the point cloud to be in. It expects x to increase rightwards, y to increase downwards (this was the confusing part) and z to increase outwards (I think - it might not matter). See the first sketch after this list.
- It also expects the cloud to be organized (points arranged in a width x height grid) so that it can later extract a coloured image.
- It would also be useful to document the expected arrangement when 'vertical' is true.
- setPersonClusterLimits takes variables called min/max width/height and uses them to compute min/max point counts for Euclidean clustering, but the formula it uses isn't documented, and despite what the documentation says, it doesn't really correspond to the min/max width/height of a human. This ought to be fleshed out, or perhaps the parameters made less overloaded in use (min/max height does get used as the person's height later, though it only corresponds to height if that person is standing straight). See the second sketch below for what the computation appears to be.
- setDimensionLimits is never called on the HeadBasedSubclustering object, so if your cloud happens to be dense enough that people end up with more than 5000 points (or sparse enough that they have fewer than 30), you're out of luck. See the last sketch below.
- Cluster tolerance is 2 * voxel size, rather than a configurable tolerance or ratio. For the default 6cm voxel size, this is a very high tolerance. It would seem to me that this variable isn't really linked with voxel size.
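On the first point, here's a sketch of the kind of transform needed if your cloud starts out in a typical z-up frame (x forward, y left, z up). The axis mapping is my own guess from the behaviour described above, not anything documented, so treat it as illustrative:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <Eigen/Core>

// Map a z-up cloud (x forward, y left, z up) into the camera-style frame the
// detector appears to expect (x right, y down, z forward/outwards):
//   camera_x = -world_y, camera_y = -world_z, camera_z = world_x
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr
toDetectorFrame (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr& world_cloud)
{
  Eigen::Matrix4f T = Eigen::Matrix4f::Identity ();
  T.block<3,3> (0,0) << 0, -1,  0,
                        0,  0, -1,
                        1,  0,  0;

  pcl::PointCloud<pcl::PointXYZRGBA>::Ptr camera_cloud (new pcl::PointCloud<pcl::PointXYZRGBA>);
  pcl::transformPointCloud (*world_cloud, *camera_cloud, T);  // preserves the organized layout
  return camera_cloud;
}
```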
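On the cluster limits: if I'm reading the code right, the point-count limits seem to be derived from the width/height parameters and the voxel size with something like the arithmetic below. This is my reading, not documented behaviour, and the height/width values here are just example inputs, not PCL defaults:

```cpp
#include <iostream>

// Sketch of how the Euclidean-clustering point limits appear to be derived from
// setPersonClusterLimits(min_height, max_height, min_width, max_width) and the
// voxel size -- effectively an area in "voxels squared", not a human dimension.
int main ()
{
  const float voxel_size = 0.06f;                   // the default 6 cm voxel
  const float min_height = 1.3f, min_width = 0.5f;  // example values, not defaults
  const float max_height = 2.3f, max_width = 1.0f;

  const int min_points = static_cast<int> (min_height * min_width / (voxel_size * voxel_size));
  const int max_points = static_cast<int> (max_height * max_width / (voxel_size * voxel_size));

  // Roughly 180 and 638 for these inputs, which is why the parameters don't map
  // onto any intuitive notion of a person's width or height.
  std::cout << min_points << " " << max_points << std::endl;
}
```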
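And for the subclustering limits, this is the kind of call I'd want to be able to make. setDimensionLimits does exist on HeadBasedSubclustering, but the app never calls it and doesn't expose the object; the 20000 value is just an example for a dense cloud:

```cpp
#include <pcl/people/head_based_subcluster.h>
#include <pcl/point_types.h>

// Sketch: if the subclusterer were exposed (or configured from a user-supplied
// value), dense clouds where a person spans more than 5000 points could be
// handled by relaxing the hard-coded 30/5000 defaults.
void configureSubcluster (pcl::people::HeadBasedSubclustering<pcl::PointXYZRGBA>& subcluster)
{
  subcluster.setDimensionLimits (30, 20000);  // min/max points per person cluster
}
```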