Description
I want to use this quantization platform to quantize PointPillars and export the quantized parameters (int8 weights and biases). What should I do?
I downloaded the corresponding file from the following entry:
files:
- name: pt_pointpillars_kitti_12000_100_11.2G_3.0
type: float & quantized
board: GPU
download link: https://www.xilinx.com/bin/public/openDownload?filename=pt_pointpillars_kitti_12000_100_11.2G_3.0.zip
checksum: 5ad7a30c21dc6c2041ff03c827fd53dc
However, the .xmodel file only contains the model topology; it does not contain the int8 parameters or quantization coefficients (similar to a DQD structure).
I think qat_converted.pth holds the parameters obtained after QAT, and the quantization coefficients are then required to derive the corresponding int8 parameters.
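For reference, a minimal sketch of how float weights map to int8 under a symmetric power-of-two (fixed-point) scheme, which is the style of quantization DPU targets typically use. The function name and the example `fix_point` value of 6 are assumptions for illustration, not values taken from the exported model:

```python
import numpy as np

def quantize_to_int8(w, fix_point):
    """Symmetric power-of-two quantization sketch:
    w_int8 = round(w * 2**fix_point), clipped to the int8 range.
    `fix_point` stands in for the per-tensor quantization
    coefficient the quantizer would export."""
    scaled = np.round(w * (2.0 ** fix_point))
    return np.clip(scaled, -128, 127).astype(np.int8)

# Illustrative float weights with an assumed fix-point position of 6
w_float = np.array([0.5, -0.25, 1.9, -2.1], dtype=np.float32)
w_int8 = quantize_to_int8(w_float, fix_point=6)
print(w_int8)  # [  32  -16  122 -128]
```

Note the last value saturates at -128, since -2.1 * 64 = -134.4 falls outside the int8 range; without the exported fix-point positions, this conversion cannot be reproduced from the float weights alone.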