<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>README.md</title>
<link rel="stylesheet" href="https://stackedit.io/style.css" />
</head>
<body class="stackedit">
<div class="stackedit__html"><h1 id="mobilenet">02. MobileNet</h1>
<h2 id="assignment">Assignment</h2>
<ol>
<li>“Each” batch is supposed to collect 1000 images for the classes mentioned above:
<ol>
<li>A Google Drive link has already been emailed to you (to one person in every group; you can add the rest)</li>
<li>When you search, make sure to add some country names like “Small Quadcopter India”. You cannot use these 3 names “USA”, “India” and “China”. We are trying to avoid downloading the same files.</li>
<li>In all of these images, the objects MUST be flying and not on the ground (so no product images or them on the ground)</li>
<li>Don’t use only Google Images; also use Flickr, Bing, Yahoo and DuckDuckGo</li>
<li>Do not rename the files, as we’d like to know if the same files are there.</li>
<li>You need to add 1000 images to the respective folders on Google Drive <strong>before Wednesday Noon</strong>.</li>
</ol>
</li>
<li>Train (transfer learning) MobileNet-V2 on a custom dataset of 31000 images:
<ol>
<li>21k train, 10k test; remove duplicates (same file names; see the duplicate-check sketch after this list)</li>
<li>with 4 classes: Small 4Copter, Large 4Copter, Winged Drones and Flying Birds</li>
<li>The model is trained on 224x224 images, but the images you download will not be 224x224. Think about and implement the best strategy.</li>
<li>Upload the model on Lambda, and keep it ready for future use (use the same S3 bucket).</li>
</ol>
</li>
</ol>
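<p>Since files keep their original names across contributors, duplicates can be flagged by comparing file names or content hashes across the class folders. A minimal sketch, assuming the Drive folders have been synced locally to a hypothetical <code>IFO_raw/</code> directory:</p>
<pre class=" language-python"><code class="prism language-python"># duplicate-check sketch: flags files that appear more than once across the raw class folders
# assumes the Drive folders were synced locally to a hypothetical 'IFO_raw/' directory
import hashlib
from collections import defaultdict
from pathlib import Path

seen = defaultdict(list)
for path in Path('IFO_raw').glob('*/*.jpg'):
    digest = hashlib.md5(path.read_bytes()).hexdigest()  # content hash also catches renamed copies
    seen[digest].append(path)

duplicates = {digest: paths for digest, paths in seen.items() if len(paths) > 1}
print(f'found {len(duplicates)} duplicated images')
for paths in duplicates.values():
    for extra in paths[1:]:  # keep the first copy, list the rest for removal
        print('duplicate:', extra)
</code></pre>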
<h2 id="solution">Solution</h2>
<p><strong>Deployed:</strong> <a href="https://floating-refuge-59093.herokuapp.com/">https://floating-refuge-59093.herokuapp.com/</a><br>
See the bottom of this file for how I deployed to Heroku.</p>
<p>Creation of the IFO Dataset:</p>
<p>Dataset Link: <a href="https://drive.google.com/file/d/1LXbEadbpuvTJVwj5thGrlMjYP9hzt6Wn/view?usp=sharing">https://drive.google.com/file/d/1LXbEadbpuvTJVwj5thGrlMjYP9hzt6Wn/view?usp=sharing</a></p>
<p>Preprocess raw dataset: <a href="https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/IFO_preprocess.ipynb">https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/IFO_preprocess.ipynb</a></p>
<p>Dataset Visualization: <a href="https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/01_IFODataset.ipynb">https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/01_IFODataset.ipynb</a></p>
<p>Training: <a href="https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/02_Training.ipynb">https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/02_Training.ipynb</a></p>
<p>Misclassifications: <a href="https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/03_Misclassifications.ipynb">https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/03_Misclassifications.ipynb</a></p>
<h3 id="dataset">Dataset</h3>
<pre class=" language-python"><code class="prism language-python">import json
import os
from pathlib import Path
from zipfile import ZipFile

import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from torch.utils.data import Dataset, Subset


class IFODataset(Dataset):
    """
    Dataset generator for the MobileNetV2 implementation on the
    Identified Flying Objects (IFO) dataset
    """

    class_names = ['Flying_Birds', 'Large_QuadCopters', 'Small_QuadCopters', 'Winged_Drones']

    def __init__(self, root, source_zipfile, transform=None):
        self.root = Path(root) / 'IFO'
        self.root.mkdir(parents=True, exist_ok=True)
        self.source_zipfile = Path(source_zipfile)
        self.transform = transform

        # extract the zip only if the cleaned dataset is not already on disk
        if os.path.isdir(self.root / 'IFOCleaned'):
            print(f"dataset folder/files already exist in {self.root / 'IFOCleaned'}")
        else:
            self.extractall()

        # every class sits in its own folder: IFOCleaned/{class_name}/*.jpg
        self.images_paths = sorted(list(Path(self.root / 'IFOCleaned').glob('*/*.jpg')))
        self.targets = [self.class_names.index(image_path.parent.name) for image_path in self.images_paths]

        print(f'found {len(self.images_paths)} images in total')
        l = list(self.targets)
        images_per_class = dict((self.class_names[x], l.count(x)) for x in set(l))
        print(json.dumps(images_per_class, indent=4))

        # split indices into train and test, stratified so the classes are distributed equally
        self.train_idxs, self.test_idxs = train_test_split(
            np.arange(len(self.images_paths)), test_size=0.3, shuffle=True, stratify=self.targets)

    def extractall(self):
        print('Extracting the dataset zip file')
        with ZipFile(self.source_zipfile, 'r') as zipf:
            zipf.extractall(self.root)

    def split_dataset(self):
        return Subset(self, indices=self.train_idxs), Subset(self, indices=self.test_idxs)

    def __len__(self):
        return len(self.images_paths)

    def __getitem__(self, index):
        image_path = self.images_paths[index]
        image = Image.open(image_path).convert('RGB')
        target = self.targets[index]

        if self.transform:
            image = self.transform(image)

        return image, target
</code></pre>
<p>The dataset is a simple zip file with each class in its own folder. What’s interesting is how we transform the images for training.</p>
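<p>A minimal usage sketch (the root path, zip file name, batch size and worker count below are placeholders, not the exact values from the notebook):</p>
<pre class=" language-python"><code class="prism language-python"># usage sketch: build the dataset and split it into stratified train/test subsets
# 'data' and 'IFO.zip' are placeholder paths, not the exact ones from the notebook
from torch.utils.data import DataLoader

dataset = IFODataset(root='data', source_zipfile='IFO.zip', transform=None)  # the notebook wires in its transform pipeline here
train_set, test_set = dataset.split_dataset()

train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)  # placeholder batch size
test_loader = DataLoader(test_set, batch_size=128, shuffle=False, num_workers=4)
</code></pre>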
<p><strong>Some dataset stats</strong></p>
<pre class=" language-python"><code class="prism language-python"><span class="token punctuation">{</span>
<span class="token string">"Flying_Birds"</span><span class="token punctuation">:</span> <span class="token number">8164</span><span class="token punctuation">,</span>
<span class="token string">"Large_QuadCopters"</span><span class="token punctuation">:</span> <span class="token number">4886</span><span class="token punctuation">,</span>
<span class="token string">"Small_QuadCopters"</span><span class="token punctuation">:</span> <span class="token number">3612</span><span class="token punctuation">,</span>
<span class="token string">"Winged_Drones"</span><span class="token punctuation">:</span> <span class="token number">5531</span>
<span class="token punctuation">}</span>
</code></pre>
<pre class=" language-python"><code class="prism language-python"><span class="token string">"mean = ['0.533459901809692', '0.584880530834198', '0.615305066108704']"</span>
<span class="token string">"std = ['0.172962218523026', '0.167985364794731', '0.184633478522301']"</span>
</code></pre>
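<p>These statistics can be recomputed with a single pass over the images; a sketch (not the notebook’s exact code), assuming the <code>dataset</code> instance from the usage sketch above:</p>
<pre class=" language-python"><code class="prism language-python"># sketch: recompute per-channel mean/std of the dataset (not the notebook's exact code)
import numpy as np
from PIL import Image

channel_sum = np.zeros(3)
channel_sq_sum = np.zeros(3)
pixel_count = 0

for image_path in dataset.images_paths:
    img = np.asarray(Image.open(image_path).convert('RGB').resize((224, 224))) / 255.0
    channel_sum += img.sum(axis=(0, 1))
    channel_sq_sum += (img ** 2).sum(axis=(0, 1))
    pixel_count += img.shape[0] * img.shape[1]

mean = channel_sum / pixel_count
std = np.sqrt(channel_sq_sum / pixel_count - mean ** 2)
print('mean =', mean.tolist())
print('std  =', std.tolist())
</code></pre>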
<h3 id="transformations">Transformations</h3>
<pre class=" language-python"><code class="prism language-python">train_transforms <span class="token operator">=</span> A<span class="token punctuation">.</span>Compose<span class="token punctuation">(</span><span class="token punctuation">[</span>
<span class="token comment"># A.VerticalFlip(), not useful, flying objects cannot be flipped vertically</span>
A<span class="token punctuation">.</span>HorizontalFlip<span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
A<span class="token punctuation">.</span>LongestMaxSize<span class="token punctuation">(</span>max_size<span class="token operator">=</span><span class="token number">500</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
A<span class="token punctuation">.</span>Normalize<span class="token punctuation">(</span>mean<span class="token operator">=</span>self<span class="token punctuation">.</span>mean<span class="token punctuation">,</span> std<span class="token operator">=</span>self<span class="token punctuation">.</span>std<span class="token punctuation">)</span><span class="token punctuation">,</span>
A<span class="token punctuation">.</span>PadIfNeeded<span class="token punctuation">(</span>min_height<span class="token operator">=</span><span class="token number">500</span><span class="token punctuation">,</span> min_width<span class="token operator">=</span><span class="token number">500</span><span class="token punctuation">,</span> border_mode<span class="token operator">=</span><span class="token number">0</span><span class="token punctuation">,</span> always_apply<span class="token operator">=</span><span class="token boolean">True</span><span class="token punctuation">,</span> value<span class="token operator">=</span>self<span class="token punctuation">.</span>mean<span class="token punctuation">)</span><span class="token punctuation">,</span>
A<span class="token punctuation">.</span>Resize<span class="token punctuation">(</span>height<span class="token operator">=</span><span class="token number">224</span><span class="token punctuation">,</span> width<span class="token operator">=</span><span class="token number">224</span><span class="token punctuation">,</span> always_apply<span class="token operator">=</span><span class="token boolean">True</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
A<span class="token punctuation">.</span>Rotate<span class="token punctuation">(</span>limit<span class="token operator">=</span><span class="token number">30</span><span class="token punctuation">,</span> border_mode<span class="token operator">=</span><span class="token number">0</span><span class="token punctuation">,</span> always_apply<span class="token operator">=</span><span class="token boolean">False</span><span class="token punctuation">,</span> value<span class="token operator">=</span>self<span class="token punctuation">.</span>mean<span class="token punctuation">)</span><span class="token punctuation">,</span>
A<span class="token punctuation">.</span>Cutout<span class="token punctuation">(</span>num_holes<span class="token operator">=</span><span class="token number">2</span><span class="token punctuation">,</span> max_h_size<span class="token operator">=</span><span class="token number">48</span><span class="token punctuation">,</span> max_w_size<span class="token operator">=</span><span class="token number">48</span><span class="token punctuation">,</span> p<span class="token operator">=</span><span class="token number">0.9</span><span class="token punctuation">,</span> fill_value<span class="token operator">=</span>self<span class="token punctuation">.</span>mean<span class="token punctuation">)</span><span class="token punctuation">,</span>
AT<span class="token punctuation">.</span>ToTensor<span class="token punctuation">(</span><span class="token punctuation">)</span>
<span class="token punctuation">]</span><span class="token punctuation">)</span>
</code></pre>
<p>The series of transformations is as follows (a matching test-time pipeline is sketched after this list):</p>
<ul>
<li><code>Horizontal Flip</code>
<ul>
<li>looking at the dataset, a vertical flip makes no sense (birds or quadcopters flying upside down don’t occur), so we only flip horizontally</li>
</ul>
</li>
<li><code>LongestMaxSize</code>
<ul>
<li>this resizes the image so that its longest side (width or height) becomes 500 pixels</li>
</ul>
</li>
<li><code>Normalize</code>
<ul>
<li>pretty common; we’ve already calculated the mean and std of the dataset, so we normalize with those values during training</li>
</ul>
</li>
<li><code>PadIfNeeded</code>
<ul>
<li>this is the tricky part: the model needs a square input, and the images in our dataset are not square, so we pad each image into a <code>500x500</code> square</li>
</ul>
</li>
<li><code>Resize</code>
<ul>
<li>MobileNetV2 expects a <code>224x224</code> input, so we resize to that</li>
</ul>
</li>
<li><code>Rotate</code>
<ul>
<li>an additional augmentation that tilts the image by up to <code>±30</code> degrees to improve test accuracy</li>
</ul>
</li>
<li><code>CutOut</code>
<ul>
<li>Cutout regularization is similar to dropout applied directly on the input image, and its paper shows it works better for image data</li>
</ul>
</li>
</ul>
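<p>For completeness, a sketch of a matching test-time pipeline that keeps only the deterministic steps (this is an assumption about the notebook, not copied from it):</p>
<pre class=" language-python"><code class="prism language-python"># test-time pipeline sketch: only the deterministic resize/normalize steps, no augmentation
# (an assumption about the notebook, not copied from it); A / AT are the same
# albumentations / albumentations.pytorch imports used by the training pipeline above
mean = [0.5335, 0.5849, 0.6153]  # rounded from the dataset stats above
std = [0.1730, 0.1680, 0.1846]

test_transforms = A.Compose([
    A.LongestMaxSize(max_size=500),
    A.Normalize(mean=mean, std=std),
    A.PadIfNeeded(min_height=500, min_width=500, border_mode=0, always_apply=True, value=mean),
    A.Resize(height=224, width=224, always_apply=True),
    AT.ToTensor()
])
</code></pre>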
<p><img src="https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/images/dataset.png?raw=true" alt="enter image description here"><br>
<strong>Dataset</strong></p>
<p><img src="https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/images/aug_dataset.png?raw=true" alt="enter image description here"><br>
<strong>Augmented Dataset</strong></p>
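<p>The assignment asks for transfer learning on MobileNet-V2; below is a minimal sketch of that setup (the notebook’s exact layer-freezing, learning-rate and optimizer choices may differ):</p>
<pre class=" language-python"><code class="prism language-python"># transfer-learning sketch for MobileNetV2 (the notebook's exact freezing/optimizer choices may differ)
import torch
import torch.nn as nn
from torchvision import models

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = models.mobilenet_v2(pretrained=True)  # start from the ImageNet weights

# freeze the feature extractor and train only the classifier head
for param in model.features.parameters():
    param.requires_grad = False

# replace the 1000-class ImageNet head with a 4-class head
model.classifier[1] = nn.Linear(model.last_channel, len(IFODataset.class_names))
model = model.to(device)

optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=0.01, momentum=0.9)  # placeholder hyperparameters
criterion = nn.CrossEntropyLoss()
</code></pre>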
<h3 id="training-logs">Training Logs</h3>
<p><img src="https://raw.githubusercontent.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/b84ac8be9a712732d1bbfae0dad919318680db99/02-MobileNet/images/BatchLoss_Train_loss.svg" alt="enter image description here"><br>
BatchLoss-Train</p>
<p><img src="https://raw.githubusercontent.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/b84ac8be9a712732d1bbfae0dad919318680db99/02-MobileNet/images/EpochLoss_Test_loss.svg" alt="enter image description here"><br>
EpochLoss-Test</p>
<p><img src="https://raw.githubusercontent.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/b84ac8be9a712732d1bbfae0dad919318680db99/02-MobileNet/images/EpochLoss_Train_loss.svg" alt="enter image description here"><br>
EpochLoss-Train</p>
<p><img src="https://raw.githubusercontent.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/b84ac8be9a712732d1bbfae0dad919318680db99/02-MobileNet/images/EpochAccuracy_Train_accuracy.svg" alt="enter image description here"><br>
EpochAccuracy-Train</p>
<p><img src="https://raw.githubusercontent.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/b84ac8be9a712732d1bbfae0dad919318680db99/02-MobileNet/images/EpochAccuracy_Test_accuracy.svg" alt="enter image description here"><br>
EpochAccuracy-Test</p>
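<p>Once training is done, the assignment asks for the model to be uploaded and kept ready for the Lambda deployment (same S3 bucket). A sketch of tracing and uploading the trained <code>model</code> from the setup above; the bucket and file names are placeholders:</p>
<pre class=" language-python"><code class="prism language-python"># sketch: trace the trained model and push it to S3; bucket/file names are placeholders
import torch
import boto3

model.eval()
example_input = torch.randn(1, 3, 224, 224).to(device)  # same input shape the model was trained on
traced_model = torch.jit.trace(model, example_input)
traced_model.save('mobilenetv2_ifo.pt')

s3 = boto3.client('s3')
s3.upload_file('mobilenetv2_ifo.pt', 'my-eva4-bucket', 'mobilenetv2_ifo.pt')  # placeholder bucket name
</code></pre>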
<h3 id="misclassifications">Misclassifications</h3>
<pre class=" language-python"><code class="prism language-python"><span class="token punctuation">{</span>
<span class="token string">'Flying_Birds'</span><span class="token punctuation">:</span> <span class="token number">18</span><span class="token punctuation">,</span>
<span class="token string">'Large_QuadCopters'</span><span class="token punctuation">:</span> <span class="token number">133</span><span class="token punctuation">,</span>
<span class="token string">'Small_QuadCopters'</span><span class="token punctuation">:</span> <span class="token number">748</span><span class="token punctuation">,</span>
<span class="token string">'Winged_Drones'</span><span class="token punctuation">:</span> <span class="token number">324</span>
<span class="token punctuation">}</span>
</code></pre>
<p><img src="https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/images/misclassifications.png?raw=true" alt="enter image description here"></p>
<p><strong>Classification Report</strong></p>
<pre class=" language-text"><code class="prism language-text">                   precision    recall  f1-score   support

     Flying_Birds       0.96      0.99      0.98      2449
Large_QuadCopters       0.58      0.91      0.71      1466
Small_QuadCopters       0.84      0.31      0.45      1084
    Winged_Drones       0.94      0.80      0.87      1659

         accuracy                           0.82      6658
        macro avg       0.83      0.75      0.75      6658
     weighted avg       0.85      0.82      0.80      6658

                   Flying_Birds  ...  Winged_Drones
Flying_Birds               2431  ...             12
Large_QuadCopters            29  ...             50
Small_QuadCopters            12  ...             24
Winged_Drones                59  ...           1335

[4 rows x 4 columns]
</code></pre>
<p><strong>Confusion Matrix</strong></p>
<p><img src="https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/images/confusion_matrix.png?raw=true" alt="enter image description here"></p>
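<p>A sketch of how such a report and confusion matrix can be produced with scikit-learn, assuming the test-set labels and predictions have been collected into <code>y_true</code> and <code>y_pred</code>:</p>
<pre class=" language-python"><code class="prism language-python"># sketch: classification report and confusion matrix with scikit-learn,
# assuming y_true / y_pred were collected while iterating over the test loader
import pandas as pd
from sklearn.metrics import classification_report, confusion_matrix

class_names = IFODataset.class_names

print(classification_report(y_true, y_pred, target_names=class_names))

cm = pd.DataFrame(
    confusion_matrix(y_true, y_pred),
    index=class_names,
    columns=class_names)
print(cm)
</code></pre>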
<h2 id="instructions-for-streamlit--deploy-to-heroku">Instructions for Streamlit & Deploy to Heroku</h2>
<p>Refer to <strong><a href="https://github.com/satyajitghana/TSAI-DeepVision-EVA4.0-Phase-2/blob/master/02-MobileNet/ifo-app">ifo-app</a></strong> for source code</p>
<p>Install streamlit</p>
<pre class=" language-shell"><code class="prism language-shell">pip install streamlit
</code></pre>
<p>Now, after writing <code>app.py</code>, simply run <code>streamlit run app.py</code> and check that everything works locally.</p>
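<p>A minimal sketch of what <code>app.py</code> can look like (the model file name, rounded normalization values and UI text are placeholders; refer to <strong>ifo-app</strong> for the real code):</p>
<pre class=" language-python"><code class="prism language-python"># minimal app.py sketch; model file name and preprocessing values are placeholders, see ifo-app for the real code
import streamlit as st
import torch
from PIL import Image
from torchvision import transforms

CLASS_NAMES = ['Flying_Birds', 'Large_QuadCopters', 'Small_QuadCopters', 'Winged_Drones']

@st.cache(allow_output_mutation=True)
def load_model():
    return torch.jit.load('mobilenetv2_ifo.pt', map_location='cpu')

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.533, 0.585, 0.615], std=[0.173, 0.168, 0.185]),  # rounded dataset stats
])

st.title('IFO Classifier')
uploaded = st.file_uploader('Upload an image', type=['jpg', 'jpeg', 'png'])
if uploaded is not None:
    image = Image.open(uploaded).convert('RGB')
    st.image(image, use_column_width=True)
    with torch.no_grad():
        logits = load_model()(preprocess(image).unsqueeze(0))
    st.write('Prediction:', CLASS_NAMES[int(logits.argmax(dim=1))])
</code></pre>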
<p>Cool, now let’s deploy!</p>
<p>Make sure your <code>requirements.txt</code> uses these builds of torch and torchvision; they are the CPU wheels for Python 3.6, which Heroku uses.</p>
<p><strong>requirements.txt</strong></p>
<pre class=" language-text"><code class="prism language-text">https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp36-cp36m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.7.0%2Bcpu-cp36-cp36m-linux_x86_64.whl
</code></pre>
<p>You can find pytorch releases here: <a href="https://download.pytorch.org/whl/torch_stable.html">https://download.pytorch.org/whl/torch_stable.html</a></p>
<p>Now the <strong><code>Procfile</code></strong>:</p>
<pre class=" language-text"><code class="prism language-text">web: sh setup.sh && streamlit run app.py
</code></pre>
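<p>The <code>setup.sh</code> referenced by the <code>Procfile</code> typically writes a Streamlit config that binds to the port Heroku assigns; a sketch of what it can contain:</p>
<pre class=" language-shell"><code class="prism language-shell"># setup.sh sketch: point Streamlit at the $PORT Heroku assigns and run headless
mkdir -p ~/.streamlit

echo "[server]
headless = true
enableCORS = false
port = $PORT
" > ~/.streamlit/config.toml
</code></pre>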
<p>Initialize an empty git repo</p>
<pre class=" language-shell"><code class="prism language-shell">git init
git add .
git commit -m "initial build"
</code></pre>
<p>Now create a Heroku project and push to it!</p>
<pre class=" language-shell"><code class="prism language-shell">heroku create
git push heroku master
</code></pre>
<p>NOTE: you can also connect your GitHub repo to Heroku, and it will build on every push to GitHub.</p>
</div>
</body>
</html>