YOLOX End2end & Blade Support #66
Conversation
zxy seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
add unittest
@@ -0,0 +1,259 @@
import argparse
add license
pass

results = []
`results` doesn't look like it should be a global variable; it could be moved inside a function.
def printStats(backend, timings, batch_size=1, model_name='default'):
    times = np.array(timings)
add a docstring for this function
results = []

def printStats(backend, timings, batch_size=1, model_name='default'):
the API name doesn't match the actual action; the function doesn't print anything.
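A minimal sketch of what this suggestion could look like, assuming the function is renamed to `compute_stats` (a hypothetical name) and returns its summary instead of printing it:

```python
import numpy as np

def compute_stats(backend, timings, batch_size=1, model_name='default'):
    """Summarize benchmark timings (hypothetical rename of printStats).

    Args:
        backend: name of the inference backend, e.g. 'jit' or 'blade'.
        timings: iterable of per-iteration latencies in seconds.
        batch_size: images per iteration, used to derive throughput.
        model_name: label stored alongside the result.

    Returns:
        dict with mean/median latency and images-per-second throughput.
    """
    times = np.array(timings)
    mean_s = float(times.mean())
    return {
        'backend': backend,
        'model': model_name,
        'mean_s': mean_s,
        'median_s': float(np.median(times)),
        'ips': batch_size / mean_s,
    }
```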
@torch.no_grad()
def benchmark(model, inp, backend, batch_size, model_name='default'):
add a docstring for this function
logging.warning(summary.to_markdown())

# x, y, z = inputs
# inputs = (x.to(torch.int32), y.to(torch.int32), z.to(torch.int32))
remove useless lines
if __name__ == '__main__':
remove main
Are there any tutorials introducing how to export with
add pip requirements
@torch.jit.script
class DetPostProcess:
    """Process output values of detection models.
Hi, is the output here produced after NMS?
Yes, NMS has already been done in the forward function.
postprocess_fn=DetPostProcess()  # `DetPostProcess` refers to ev_torch.apis.export.DetPostProcess
trace_model=True)

model_script = torch.jit.script(end2end_model)
Could we use the torch.jit.trace mechanism to export an End2End model here, or do you think the inference time will be slower if the script mechanism is used?
We support exporting an End2End model with both the preprocess and postprocess procedures via torch.jit.script, which cannot be replaced by torch.jit.trace since trace requires tensor inputs. You can also export a model with `end2end=False`, which uses trace to export the model and is a little bit faster.
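A small illustration of the difference this reply describes: `torch.jit.trace` records only the branch taken for the example input, while `torch.jit.script` preserves data-dependent control flow. The module and values here are made up for the demo:

```python
import torch

class TinyPost(torch.nn.Module):
    """Toy stand-in for a postprocess module with data-dependent control flow."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if x.sum() > 0:   # branch depends on the input values
            return x * 2
        return x + 1

m = TinyPost()
scripted = torch.jit.script(m)               # keeps both branches
traced = torch.jit.trace(m, torch.ones(3))   # freezes the `x * 2` branch
```

For a negative input, `scripted` takes the `x + 1` branch while `traced` still applies `x * 2`; this is why script is required when the exported graph must handle arbitrary (or non-tensor) inputs, as in the end2end case.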
update master.
easycv/models/utils/dist_utils.py
Outdated
try:
    from torch.distributed import ReduceOp
except ImportError:
    raise ImportError('Blade does not support ReduceOp')
If you use blade, it will definitely raise an error here; the ReduceOp import should be encapsulated within the calling API. Please confirm the compatibility of this API.
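One possible shape of the encapsulation being asked for, sketched with a hypothetical helper name (`all_reduce_sum`): the import happens lazily inside the call, so merely loading the module cannot fail on builds that lack `ReduceOp`:

```python
import torch

def all_reduce_sum(tensor: torch.Tensor) -> torch.Tensor:
    """Hypothetical wrapper: keep the ReduceOp import inside the caller."""
    import torch.distributed as dist
    if not (dist.is_available() and dist.is_initialized()):
        return tensor  # single-process run: nothing to reduce
    from torch.distributed import ReduceOp  # only imported when actually needed
    dist.all_reduce(tensor, op=ReduceOp.SUM)
    return tensor
```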
stat, output = subprocess.getstatusoutput(
    f'python tools/export.py {config_file} {ori_ckpt} {ckpt_path}')
self.assertTrue(stat == 0, 'export model failed')
if stat != 0:
`if stat != 0` should be moved before `self.assertTrue`, otherwise the error log will not be printed when the assert fails.
tests/toolkit/time_cost.py
Outdated
@@ -0,0 +1,72 @@
import timeit
move to easycv/utils
from numpy.testing import assert_array_almost_equal

@unittest.skipIf(torch.__version__ == '1.8.0',
should be @unittest.skipIf(env != blade env)
self.assertTrue(stat == 0, 'export model failed')

def test_export_yolox(self):
tests should be added for all the cases, such as:
export = dict(use_jit=True, export_blade=False, end2end=False)
export = dict(use_jit=False, export_blade=True, end2end=False)
export = dict(use_jit=False, export_blade=True, end2end=True)
export = dict(use_jit=True, export_blade=True, end2end=True)
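One way to cover all four combinations without duplicating test bodies is a `subTest` loop; the test class name and placeholder body below are illustrative only, with the real export call to be filled in:

```python
import unittest

class TestYoloxExportConfigs(unittest.TestCase):
    """Illustrative sketch: one subTest per export config from the review."""

    CONFIGS = [
        dict(use_jit=True, export_blade=False, end2end=False),
        dict(use_jit=False, export_blade=True, end2end=False),
        dict(use_jit=False, export_blade=True, end2end=True),
        dict(use_jit=True, export_blade=True, end2end=True),
    ]

    def test_export_yolox_all_modes(self):
        for cfg in self.CONFIGS:
            with self.subTest(**cfg):
                # placeholder: run the real export with `cfg` here
                self.assertIsInstance(cfg, dict)
```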
easycv/apis/export.py
Outdated
# a well trained model will generate reasonable results; otherwise, we should change model.test_conf=0.0 to avoid empty tensors in inference
# using trace is a little bit faster than script, but it is not supported in an end2end model.
if end2end:
    try:
try except seems redundant
easycv/apis/export.py
Outdated
print('test_pipeline not found, using default preprocessing!')
raise ValueError('export model config without test_pipeline')

target_size, keep_ratio, pad_val, mean, std, to_rgb = parse_pipleline(
remove parse_pipleline; users' pipelines vary, and this should be performed according to the user's behavior.
easycv/apis/export.py
Outdated
print(test_pipeline)
else:
    print('test_pipeline not found, using default preprocessing!')
    raise ValueError('export model config without test_pipeline')
if not hasattr(cfg, 'test_pipeline'), use the default process and add a print log?
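The fallback this comment suggests might look like the following (the function name and the default argument are hypothetical):

```python
import logging

def resolve_test_pipeline(cfg, default_pipeline):
    """Hypothetical helper: fall back to a default pipeline with a log
    message instead of raising when cfg has no test_pipeline."""
    if hasattr(cfg, 'test_pipeline'):
        return cfg.test_pipeline
    logging.warning('test_pipeline not found, using default preprocessing!')
    return default_pipeline
```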
from easycv.utils.test_util import clean_up, get_tmp_dir

@unittest.skipIf(torch.__version__ != '1.8.1+cu102',
skipIf should only apply to the unittests that use blade?
from numpy.testing import assert_array_almost_equal

@unittest.skipIf(torch.__version__ != '1.8.1+cu102',
skipIf should only apply to the unittests that use blade?
easycv/apis/export.py
Outdated
input = 255 * torch.rand((batch_size, 3) + img_scale)

if hasattr(cfg, 'test_pipeline'):
Simplify the scene first, optimize the plan later. When exporting, you don't need to care about the user's test_pipeline config; do the following simplification:
- end2end: use the default preprocess and postprocess APIs
- not end2end: directly export the model without preprocess and postprocess
You should define your own preprocess and postprocess as below or the default test pipeline will be used.

```python
function is ok, no need to use class
for idx, img in enumerate(input_data_list):
    if type(img) is not np.ndarray:
        img = np.asarray(img)
    img = preprocess(img)
how do I define my own preprocess fn?
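One hedged answer, assuming the export wrapper accepts any callable that takes and returns an `np.ndarray` (the normalization below is illustrative, not the project's actual default):

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Hypothetical user-defined preprocess: uint8 HWC image -> float32 in [0, 1]."""
    return img.astype(np.float32) / 255.0
```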
docs/source/tutorials/export.md
Outdated
#### Non-End2end model

```python
input_data_list = [np.asarray(Image.open(img))]
```
image_path = 'data/demo.jpg'
input_data_list = [np.asarray(Image.open(image_path))]
docs/source/tutorials/export.md
Outdated
```python
input_data_list = [np.asarray(Image.open(img))]
```
image_path = 'data/demo.jpg'
input_data_list = [np.asarray(Image.open(image_path))]
Wrap the export process with End2endModelExportWrapper. We support the export versions 'jit' and 'blade'. One can choose whether to export an end2end model at a little extra time cost. For more details on blade, you can refer to https://help.aliyun.com/document_detail/205134.html