tag:github.com,2008:https://github.com/Coderx7/SimpleNet/releases
Release notes from SimpleNet
2023-02-15T18:43:07Z
tag:github.com,2008:Repository/85390040/v1.0.0-alpha.2
2023-04-28T16:19:34Z
Initial ImageNet pretrained weights
<p>Initial ImageNet pretrained weights for the 1.5m, 3m, 5m, and 9m variants can now be downloaded from the assets below.</p>
<h4>ImageNet Results:</h4>
<table>
<thead>
<tr>
<th align="left"><strong>Method</strong></th>
<th align="center"><strong>#Params</strong></th>
<th align="center"><strong>ImageNet</strong></th>
<th align="center"><strong>ImageNet-Real-Labels</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">SimpleNetV1_imagenet(36.23 MB)</td>
<td align="center">9.5m</td>
<td align="center">74.17/91.614</td>
<td align="center">81.24/94.63</td>
</tr>
<tr>
<td align="left">SimpleNetV1_imagenet(21.91 MB)</td>
<td align="center">5.7m</td>
<td align="center">71.936/90.3</td>
<td align="center">79.12/93.68</td>
</tr>
<tr>
<td align="left">SimpleNetV1_imagenet(12.52 MB)</td>
<td align="center">3m</td>
<td align="center">68.15/87.762</td>
<td align="center">75.66/91.80</td>
</tr>
<tr>
<td align="left">SimpleNetV1_imagenet(5.73 MB)</td>
<td align="center">1.5m</td>
<td align="center">61.524/83.43</td>
<td align="center">69.11/88.10</td>
</tr>
</tbody>
</table>
<h3>Note 1</h3>
<p>These models were converted from their PyTorch counterparts through ONNX Runtime.<br>
The original models can be accessed from our official PyTorch repository.</p>
<h3>Note 2</h3>
<p>Please note that since the models were converted from ONNX to Caffe, the mean, std, and crop ratio used are as follows:</p>
<div class="highlight highlight-source-python notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="DEFAULT_CROP_PCT = 0.875
IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)"><pre><span class="pl-c1">DEFAULT_CROP_PCT</span> <span class="pl-c1">=</span> <span class="pl-c1">0.875</span>
<span class="pl-c1">IMAGENET_DEFAULT_MEAN</span> <span class="pl-c1">=</span> (<span class="pl-c1">0.485</span>, <span class="pl-c1">0.456</span>, <span class="pl-c1">0.406</span>)
<span class="pl-c1">IMAGENET_DEFAULT_STD</span> <span class="pl-c1">=</span> (<span class="pl-c1">0.229</span>, <span class="pl-c1">0.224</span>, <span class="pl-c1">0.225</span>)</pre></div>
<p>Also note that images were not channel-swapped during training, so you don't need to do any channel swap either.<br>
You also do NOT need to rescale the input to [0, 255].</p>
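<p>For reference, a minimal preprocessing sketch under these settings is given below; it assumes PIL and NumPy are available, and the 224x224 input size and the image file name are illustrative assumptions rather than values taken from this release.</p>
<pre># Minimal preprocessing sketch (assumptions: PIL/NumPy, 224x224 input, "cat.jpg").
import numpy as np
from PIL import Image

DEFAULT_CROP_PCT = 0.875
IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)

def preprocess(path, img_size=224):
    # Resize the shorter side so the center crop covers DEFAULT_CROP_PCT of it.
    scale_size = int(round(img_size / DEFAULT_CROP_PCT))
    img = Image.open(path).convert("RGB")   # keep RGB order: no channel swap
    w, h = img.size
    scale = scale_size / min(w, h)
    img = img.resize((int(round(w * scale)), int(round(h * scale))), Image.BILINEAR)
    # Center crop to img_size x img_size.
    w, h = img.size
    left, top = (w - img_size) // 2, (h - img_size) // 2
    img = img.crop((left, top, left + img_size, top + img_size))
    # Keep values in [0, 1] (do NOT rescale to [0, 255]), then normalize.
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - np.array(IMAGENET_DEFAULT_MEAN)) / np.array(IMAGENET_DEFAULT_STD)
    # HWC to NCHW with a batch dimension of 1.
    return x.transpose(2, 0, 1)[None, ...].astype(np.float32)

x = preprocess("cat.jpg")   # shape: (1, 3, 224, 224)</pre>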
Coderx7
tag:github.com,2008:Repository/85390040/v1.0.0-alpha
2023-04-28T16:19:54Z
Initial ImageNet Models
<p>Initial ImageNet models: the 1.5m, 3m, and 5m variants.</p>
Coderx7
tag:github.com,2008:Repository/85390040/v1.0.0
2023-04-14T15:10:54Z
ImageNet pretrained weights
<p>Initial ImageNet pretrained weights for the 1.5m, 3m, 5m, and 9m variants can now be downloaded from the assets below.</p>
<h3>m2 variants:</h3>
<table>
<thead>
<tr>
<th align="left"><strong>Method</strong></th>
<th align="center"><strong>#Params</strong></th>
<th align="center"><strong>ImageNet</strong></th>
<th align="center"><strong>ImageNet-Real-Labels</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">simplenetv1_9m_m2(36 MB)</td>
<td align="center">9.5m</td>
<td align="center">74.23/91.748</td>
<td align="center">81.22/94.756</td>
</tr>
<tr>
<td align="left">simplenetv1_5m_m2(22 MB)</td>
<td align="center">5.7m</td>
<td align="center">72.03/90.324</td>
<td align="center">79.328/93.714</td>
</tr>
<tr>
<td align="left">simplenetv1_small_m2_075(12 MB)</td>
<td align="center">3m</td>
<td align="center">68.506/88.15</td>
<td align="center">76.283/92.02</td>
</tr>
<tr>
<td align="left">simplenetv1_small_m2_05(5 MB)</td>
<td align="center">1.5m</td>
<td align="center">61.67/83.488</td>
<td align="center">69.31/ 88.195</td>
</tr>
</tbody>
</table>
<h3>m1 variants:</h3>
<table>
<thead>
<tr>
<th align="left"><strong>Method</strong></th>
<th align="center"><strong>#Params</strong></th>
<th align="center"><strong>ImageNet</strong></th>
<th align="center"><strong>ImageNet-Real-Labels</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">simplenetv1_9m_m1(36 MB)</td>
<td align="center">9.5m</td>
<td align="center">73.792/91.486</td>
<td align="center">81.196/94.512</td>
</tr>
<tr>
<td align="left">simplenetv1_5m_m1(21 MB)</td>
<td align="center">5.7m</td>
<td align="center">71.548/89.94</td>
<td align="center">79.076/93.36</td>
</tr>
<tr>
<td align="left">simplenetv1_small_m1_075(12 MB)</td>
<td align="center">3m</td>
<td align="center">67.784/87.718</td>
<td align="center">75.448/91.69</td>
</tr>
<tr>
<td align="left">simplenetv1_small_m1_05(5 MB)</td>
<td align="center">1.5m</td>
<td align="center">61.122/82.988</td>
<td align="center">68.58/87.64</td>
</tr>
</tbody>
</table>
<h3>Note 1</h3>
<p>These models were converted from their PyTorch counterparts through ONNX Runtime.<br>
The original models can be accessed from our official PyTorch repository.</p>
<h3>Note 2</h3>
<p>Please note that since the models were converted from ONNX to Caffe, the mean, std, and crop ratio used are as follows:</p>
<div class="highlight highlight-source-python notranslate position-relative overflow-auto" data-snippet-clipboard-copy-content="DEFAULT_CROP_PCT = 0.875
IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)"><pre><span class="pl-c1">DEFAULT_CROP_PCT</span> <span class="pl-c1">=</span> <span class="pl-c1">0.875</span>
<span class="pl-c1">IMAGENET_DEFAULT_MEAN</span> <span class="pl-c1">=</span> (<span class="pl-c1">0.485</span>, <span class="pl-c1">0.456</span>, <span class="pl-c1">0.406</span>)
<span class="pl-c1">IMAGENET_DEFAULT_STD</span> <span class="pl-c1">=</span> (<span class="pl-c1">0.229</span>, <span class="pl-c1">0.224</span>, <span class="pl-c1">0.225</span>)</pre></div>
<p>Also note that images were not channel-swapped during training, so you don't need to do any channel swap either.<br>
You also do NOT need to rescale the input to [0, 255].</p>
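<p>For reference, a rough usage sketch with the converted Caffe weights is shown below; it assumes pycaffe is installed, and the prototxt/caffemodel file names, the 224x224 input size, and the "data"/"prob" blob names are illustrative assumptions rather than values taken from the release assets.</p>
<pre># Rough pycaffe usage sketch (file names, blob names and input size are assumptions).
import numpy as np
import caffe

caffe.set_mode_cpu()
# Hypothetical file names; substitute the actual prototxt/caffemodel from the assets.
net = caffe.Net("simplenetv1_deploy.prototxt", "simplenetv1_9m_m2.caffemodel", caffe.TEST)

# Dummy stand-in for an image preprocessed with the mean/std/crop settings listed above
# (normalized RGB, NCHW layout, derived from a [0, 1] image; no channel swap).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
net.blobs["data"].reshape(*x.shape)   # "data" is an assumed input blob name
net.blobs["data"].data[...] = x
out = net.forward()
print(out["prob"][0].argmax())        # "prob" is an assumed output blob name</pre>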
Coderx7