Human pose estimation models update #1718
Conversation
Please adapt the existing demo to work with both the new models and the old model (if you are not going to deprecate the old model).
Force-pushed from 1d638a7 to 8a43ca8
Please remove empty sections from the model descriptions.
models/intel/human-pose-estimation-0002/description/human-pose-estimation-0002.md
models/intel/human-pose-estimation-0003/description/human-pose-estimation-0003.md
models/intel/human-pose-estimation-0004/description/human-pose-estimation-0004.md
This PR introduces a new demo, so it certainly had no C++ postprocessing earlier. Now it supports both Python and C++ postprocessing (the same as in the human-pose-estimation-3d demo), but results and runtime differ. I'm OK with dropping C++ postprocessing entirely; this is just for your reference in case you are unsatisfied with the Python postprocessing performance.
@druzhkov-paul I'm OK with having only a Python implementation of postprocessing for the first release of this demo. We may think about optimization later.
demos/python_demos/human_pose_estimation_demo/human_pose_estimation_demo/model.py
OK. Removed C++ postprocessing for now.
Please add a soft link to the AC config from the old configs location.
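For example, something along these lines (a sketch only; the paths below are hypothetical, since the actual old and new config locations aren't shown in this thread, and in practice the link would normally be created with ln -s and committed):

```python
import os

# Hypothetical paths for illustration only; the actual old and new AC config
# locations are not shown in this thread.
old_config = 'tools/accuracy_checker/configs/human-pose-estimation-0002.yml'
new_config = 'models/intel/human-pose-estimation-0002/accuracy-check.yml'

# Create a relative soft link at the old location pointing to the new config,
# so tooling that still reads the old path keeps working.
os.symlink(os.path.relpath(new_config, os.path.dirname(old_config)), old_config)
```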
Force-pushed from 8aff9d6 to 38efe54
Please add the new demo to the demos root readme. The rest looks fine.
                  required=True, type=str)
args.add_argument('-m', '--model', help='Required. Path to an .xml file with a trained model.',
                  required=True, type=str)
args.add_argument('--type', choices=('ae', 'openpose'), required=True, type=str,
Please rename this key to -at, --architecture_type (see here).
Done.
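For reference, a minimal sketch of the renamed key, reusing the choices from the snippet above (the parser setup and help text are illustrative, not the demo's actual code):

```python
import argparse

parser = argparse.ArgumentParser(description='Human pose estimation demo (sketch)')
args = parser.add_argument_group('Options')
# The key renamed as requested in the review: -at / --architecture_type.
args.add_argument('-at', '--architecture_type', choices=('ae', 'openpose'),
                  required=True, type=str,
                  help='Required. Type of the network architecture.')
```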
Modes.USER_SPECIFIED:
    HPE(ie, args.model, target_size=args.tsize, device=args.device, plugin_config=config_user_specified,
        results=completed_request_results, max_num_requests=args.num_infer_requests,
        caught_exceptions=exceptions),
Modes.MIN_LATENCY:
    HPE(ie, args.model, target_size=args.tsize, device=args.device.split(':')[-1].split(',')[0],
        plugin_config=config_min_latency, results=completed_request_results, max_num_requests=1,
        caught_exceptions=exceptions)
@Wovchena do we need two modes here?
No, but @druzhkov-paul was pointed to your version back when it still had this. To me that means it's our responsibility to rewrite it later (the same goes for -at, --architecture_type and other things).
Co-authored-by: Eduard Zamaliev <eduard.zamaliev@intel.com>
@eizamaliev Ping?
@Daniil-Osokin FYI
@druzhkov-paul Sorry for the delay, it took a while. Many thanks for making and sharing the optimized post-processing! It really runs several times faster; looks like you are a vectorization jedi 😎 🤠 🦾. Very nice demo overall. And many thanks to the team (@eaidova @eizamaliev @IRDonch @vladimir-dudnik @Wovchena) for maintaining this awesome zoo 👍!
This PR adds three new human-pose-estimation models. These models have an architecture different from human-pose-estimation-0001, hence a new Python demo and an AC adapter have been added to support them.
Support for human-pose-estimation-0001 has been added to the new Python demo as well, but I suppose a few clarifications are needed here:
OMZ already has two implementations of the postprocessing stage for human-pose-estimation-0001: one in a C++ demo and another in a dedicated AC adapter (which is in Python). I assume these two implementations are equivalent in terms of results, though due to performance concerns some parameters differ between the AC config and the C++ demo application: for example, the feature maps' upscaling factor.
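As an illustration of what that parameter controls, here is a minimal sketch of configurable heatmap upscaling before peak extraction (assuming NumPy and OpenCV; the default factor is only an example, not the value used by either implementation):

```python
import cv2
import numpy as np

def upscale_heatmaps(heatmaps, upsample_ratio=4):
    # heatmaps: float32 array of shape (num_keypoints, H, W).
    # upsample_ratio is the kind of factor that differs between the AC config and
    # the C++ demo: a larger value gives finer peak localization but slower postprocessing.
    num_kpts, h, w = heatmaps.shape
    out = np.empty((num_kpts, h * upsample_ratio, w * upsample_ratio), dtype=heatmaps.dtype)
    for i in range(num_kpts):
        out[i] = cv2.resize(heatmaps[i], (w * upsample_ratio, h * upsample_ratio),
                            interpolation=cv2.INTER_CUBIC)
    return out
```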
I've added an alternative Python implementation of postprocessing. It has a few advantages:
To compare the speed, I've added support for C++ postprocessing with Python bindings (also used in the human-pose-estimation-3d demo) to the new demo application, though I haven't exposed its selection in the demo's CLI. So it should either be made configurable or removed altogether. Let me know what you think about it.
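For context on where the vectorization matters, here is a rough sketch of NumPy-based peak extraction from a single heatmap; it illustrates the general technique, not the actual postprocessing code in this PR:

```python
import numpy as np

def find_peaks(heatmap, threshold=0.1):
    # Vectorized 4-neighbour non-maximum suppression: a pixel is a peak if it
    # exceeds the threshold and is not smaller than its left/right/up/down neighbours.
    padded = np.pad(heatmap, 1, mode='constant', constant_values=-np.inf)
    center = padded[1:-1, 1:-1]
    peaks = ((center > threshold)
             & (center >= padded[1:-1, :-2])   # left neighbour
             & (center >= padded[1:-1, 2:])    # right neighbour
             & (center >= padded[:-2, 1:-1])   # upper neighbour
             & (center >= padded[2:, 1:-1]))   # lower neighbour
    ys, xs = np.nonzero(peaks)
    # Returns an array of (x, y, score) rows.
    return np.stack([xs, ys, heatmap[ys, xs]], axis=1)
```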
AP on the test dataset:
TODOs: