Performance issue description
The MatMul ops in the static version of this model are extremely slow compared to the dynamic version. The linked model uses dynamic input shapes so it can easily be converted to static dimensions.
I converted the model to OV format with ovc <model_file> --compress_to_fp16 false. I then renamed the ONNX model and ran the same command with the additional --input [1,3,240,240] flag. Running benchmark_app on both OV models gives very different results:
Static
Command line parameters
-d;GPU.0
-m;vit-plus-dyn-1.xml
-pc;
-report_type;detailed_counters
-data_shape;[1,3,240,240]
Configuration setup
topology;Model1
target device;GPU.0
API;async
inference_only;True
precision;UNSPECIFIED
batch size;1
number of iterations;None
number of parallel infer requests;4
duration (ms);60000
Execution results
read model time (ms);34.44
compile model time (ms);1027.40
first inference time (ms);109.96
total execution time (ms);60384.59
total number of iterations;692
latency (ms);348.69
avg latency;348.07
min latency;177.24
max latency;371.51
throughput;11.46
Dynamic
Command line parameters
-d;GPU.0
-m;vit-plus-dyn.xml
-pc;
-report_type;detailed_counters
-data_shape;[1,3,240,240]
Configuration setup
topology;Model1
target device;GPU.0
API;async
inference_only;False
precision;UNSPECIFIED
batch size;1
number of iterations;None
number of parallel infer requests;4
duration (ms);60000
Execution results
read model time (ms);46.83
compile model time (ms);4733.11
first inference time (ms);344.01
total execution time (ms);60084.50
total number of iterations;2616
latency (ms);89.58
avg latency;91.69
min latency;51.45
max latency;167.14
throughput;43.54
The dynamic model has nearly 4x lower latency and nearly 4x higher throughput. The higher throughput could make sense if the dynamic model were batching inputs, but the input shape was [1,3,240,240] in both cases. If anything, the dynamic model should have higher latency, not lower.
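As a rough sanity check on these numbers (assuming the async pipeline keeps nireq requests in flight, so average latency ≈ nireq × duration / iterations and throughput ≈ iterations / duration; a simplification, not benchmark_app's exact accounting), the reported figures are internally consistent:

```python
# Rough consistency check on the reported benchmark_app results.
# Assumption: with the async API and nireq requests in flight,
# avg latency ~= nireq * duration / iterations, throughput ~= iterations / duration.

def derived_metrics(iterations, duration_ms, nireq):
    """Return (approximate avg latency in ms, throughput in FPS)."""
    latency_ms = nireq * duration_ms / iterations
    throughput_fps = iterations * 1000.0 / duration_ms
    return latency_ms, throughput_fps

static_lat, static_fps = derived_metrics(692, 60384.59, 4)     # ~349 ms, ~11.5 FPS
dynamic_lat, dynamic_fps = derived_metrics(2616, 60084.50, 4)  # ~92 ms, ~43.5 FPS
print(f"static:  {static_lat:.1f} ms, {static_fps:.2f} FPS")
print(f"dynamic: {dynamic_lat:.1f} ms, {dynamic_fps:.2f} FPS")
```

Both runs match their reported averages to within about 1 ms, so the 4x gap is not an accounting artifact of the request count.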
Looking at the individual kernel times (screenshots of the static and dynamic detailed counters omitted here), the MatMul ops stand out.
I wondered if the 4x difference could be related to the "number of parallel infer requests;4" line in the report, but adding -nireq 1 and re-running didn't change the relative gap between the two:
Static
[ INFO ] Execution Devices:['GPU.0']
[ INFO ] Count: 610 iterations
[ INFO ] Duration: 60156.68 ms
[ INFO ] Latency:
[ INFO ] Median: 98.04 ms
[ INFO ] Average: 98.43 ms
[ INFO ] Min: 95.16 ms
[ INFO ] Max: 109.08 ms
[ INFO ] Throughput: 10.14 FPS
Dynamic
[ INFO ] Execution Devices:['GPU.0']
[ INFO ] Count: 2307 iterations
[ INFO ] Duration: 60036.19 ms
[ INFO ] Latency:
[ INFO ] Median: 25.56 ms
[ INFO ] Average: 25.83 ms
[ INFO ] Min: 23.67 ms
[ INFO ] Max: 45.64 ms
[ INFO ] Throughput: 38.43 FPS
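A quick ratio check on the reported average latencies (a trivial calculation, just to make the comparison explicit) shows the static/dynamic gap is ~3.8x with both request counts:

```python
# Ratio of static to dynamic average latency, from the reported numbers above.
nireq4_ratio = 348.07 / 91.69  # 4 parallel infer requests
nireq1_ratio = 98.43 / 25.83   # -nireq 1
print(f"nireq=4: {nireq4_ratio:.2f}x, nireq=1: {nireq1_ratio:.2f}x")
```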
Step-by-step reproduction
1. Download the provided ONNX model
2. Convert it to OV format with ovc <model_file> --compress_to_fp16 false
3. Rename the ONNX model to avoid overwriting the OV model
4. Convert again with ovc <model_file> --compress_to_fp16 false --input [1,3,240,240]
5. Run benchmark_app -d GPU.0 -m <model_file.xml> -pc -report_type detailed_counters -data_shape [1,3,240,240] for both models and compare the results
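For convenience, the steps above can be sketched as a small script (file names are placeholders; it assumes ovc and benchmark_app from the OpenVINO PyPI package are on PATH, and it only prints the commands rather than running them, since the rename in step 3 is a manual step between the two conversions):

```python
# Sketch of the reproduction commands. The model path is a placeholder for
# the downloaded ONNX file; nothing is executed here, commands are printed.
import shlex

onnx_model = "vit-plus-dyn-sim.onnx"  # placeholder path to the downloaded model

steps = [
    # Dynamic-shape conversion.
    ["ovc", onnx_model, "--compress_to_fp16", "false"],
    # Static-shape conversion (run after renaming the ONNX file so the
    # first OV model is not overwritten).
    ["ovc", onnx_model, "--compress_to_fp16", "false",
     "--input", "[1,3,240,240]"],
]
benchmark = [
    "benchmark_app", "-d", "GPU.0", "-m", "vit-plus-dyn.xml", "-pc",
    "-report_type", "detailed_counters", "-data_shape", "[1,3,240,240]",
]

for cmd in steps + [benchmark]:
    print(shlex.join(cmd))
```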
Issue submission checklist
I'm reporting a performance issue. It's not a question.
I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
There is reproducer code and related data files such as images, videos, models, etc.
OpenVINO Version
tag 2024.1.0
Operating System
Debian Bookworm (with latest intel-opencl-icd 24.22.29735.21-1)
Device used for inference
iGPU
OpenVINO installation
PyPI
Programming Language
Python
Hardware Architecture
x86 (64 bits)
Model used
https://huggingface.co/immich-app/ViT-B-16-plus-240__laion400m_e32/resolve/6e7e12096396bc9280b6db412ac30eef0a9cbe3e/visual/vit-plus-dyn-sim.onnx
Model quantization
No
Target Platform
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 22
On-line CPU(s) list: 0-21
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 7 155H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 22%
CPU max MHz: 4800.0000
CPU min MHz: 400.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization features:
Virtualization: VT-x
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-21
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Reg file data sampling: Not affected
Retbleed: Not affected
Spec rstack overflow: Not affected
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Srbds: Not affected
Tsx async abort: Not affected