'''
A small 7-layer Keras model with SSD architecture. Also serves as a template to build arbitrary network architectures.
Copyright (C) 2017 Pierluigi Ferrari
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
'''
import numpy as np
from keras.models import Model
from keras.layers import Input, Lambda, Conv2D, MaxPooling2D, BatchNormalization, ELU, Reshape, Concatenate, Activation
from keras_layer_AnchorBoxes import AnchorBoxes
def build_model(image_size,
n_classes,
min_scale=0.1,
max_scale=0.9,
scales=None,
aspect_ratios_global=[0.5, 1.0, 2.0],
aspect_ratios_per_layer=None,
two_boxes_for_ar1=True,
limit_boxes=True,
variances=[1.0, 1.0, 1.0, 1.0],
coords='centroids',
normalize_coords=False):
'''
Build a Keras model with SSD architecture, see references.
The model consists of convolutional feature layers and a number of convolutional
predictor layers that take their input from different feature layers.
The model is fully convolutional.
The implementation found here is a smaller version of the original architecture
used in the paper (where the base network consists of a modified VGG-16 extended
by a few convolutional feature layers), but of course it could easily be changed to
an arbitrarily large SSD architecture by following the general design pattern used here.
This implementation has 7 convolutional layers and 4 convolutional predictor
layers that take their input from layers 4, 5, 6, and 7, respectively.
In case you're wondering why this function has so many arguments: All arguments except
the first two (`image_size` and `n_classes`) are only needed so that the anchor box
layers can produce the correct anchor boxes. In case you're training the network, the
parameters passed here must be the same as the ones used to set up `SSDBoxEncoder`.
In case you're loading trained weights, the parameters passed here must be the same
as the ones used to produce the trained weights.
Some of these arguments are explained in more detail in the documentation of the
`SSDBoxEncoder` class.
Note: Requires Keras v2.0 or later. Training currently works only with the
TensorFlow backend (v1.0 or later).
Arguments:
image_size (tuple): The input image size in the format `(height, width, channels)`.
n_classes (int): The number of categories for classification including
the background class (i.e. the number of positive classes +1 for
            the background class).
min_scale (float, optional): The smallest scaling factor for the size of the anchor boxes as a fraction
of the shorter side of the input images. Defaults to 0.1.
max_scale (float, optional): The largest scaling factor for the size of the anchor boxes as a fraction
of the shorter side of the input images. All scaling factors between the smallest and the
largest will be linearly interpolated. Note that the second to last of the linearly interpolated
scaling factors will actually be the scaling factor for the last predictor layer, while the last
scaling factor is used for the second box for aspect ratio 1 in the last predictor layer
if `two_boxes_for_ar1` is `True`. Defaults to 0.9.
scales (list, optional): A list of floats containing scaling factors per convolutional predictor layer.
This list must be one element longer than the number of predictor layers. The first `k` elements are the
scaling factors for the `k` predictor layers, while the last element is used for the second box
for aspect ratio 1 in the last predictor layer if `two_boxes_for_ar1` is `True`. This additional
last scaling factor must be passed either way, even if it is not being used.
Defaults to `None`. If a list is passed, this argument overrides `min_scale` and
`max_scale`. All scaling factors must be greater than zero.
aspect_ratios_global (list, optional): The list of aspect ratios for which anchor boxes are to be
generated. This list is valid for all predictor layers. The original implementation uses more aspect ratios
for some predictor layers and fewer for others. If you want to do that, too, then use the next argument instead.
Defaults to `[0.5, 1.0, 2.0]`.
aspect_ratios_per_layer (list, optional): A list containing one aspect ratio list for each predictor layer.
This allows you to set the aspect ratios for each predictor layer individually. If a list is passed,
it overrides `aspect_ratios_global`. Defaults to `None`.
two_boxes_for_ar1 (bool, optional): Only relevant for aspect ratio lists that contain 1. Will be ignored otherwise.
If `True`, two anchor boxes will be generated for aspect ratio 1. The first will be generated
            using the scaling factor for the respective layer, the second one will be generated using the
            geometric mean of said scaling factor and the next bigger scaling factor. Defaults to `True`, following the original
implementation.
        limit_boxes (bool, optional): If `True`, limits box coordinates to stay within image boundaries.
            Defaults to `True`.
        variances (list, optional): A list of 4 floats > 0 by which the encoded predicted box coordinates are
            divided (so strictly speaking they are divisors rather than scaling factors). A variance value of 1.0 applies
            no scaling at all to the predictions, values in (0,1) upscale the encoded predictions, and values greater
            than 1.0 downscale the encoded predictions. If you want to reproduce the configuration of the original SSD,
            set this to `[0.1, 0.1, 0.2, 0.2]`, provided the coordinate format is 'centroids'. Defaults to `[1.0, 1.0, 1.0, 1.0]`.
coords (str, optional): The box coordinate format to be used. Can be either 'centroids' for the format
`(cx, cy, w, h)` (box center coordinates, width, and height) or 'minmax' for the format
`(xmin, xmax, ymin, ymax)`. Defaults to 'centroids'.
normalize_coords (bool, optional): Set to `True` if the model is supposed to use relative instead of absolute coordinates,
i.e. if the model predicts box coordinates within [0,1] instead of absolute coordinates. Defaults to `False`.
Returns:
model: The Keras SSD model.
predictor_sizes: A Numpy array containing the `(height, width)` portion
of the output tensor shape for each convolutional predictor layer. During
training, the generator function needs this in order to transform
the ground truth labels into tensors of identical structure as the
output tensors of the model, which is in turn needed for the cost
function.
References:
https://arxiv.org/abs/1512.02325v5
'''
n_predictor_layers = 4 # The number of predictor conv layers in the network
# Get a few exceptions out of the way first
if aspect_ratios_global is None and aspect_ratios_per_layer is None:
raise ValueError("`aspect_ratios_global` and `aspect_ratios_per_layer` cannot both be None. At least one needs to be specified.")
if aspect_ratios_per_layer:
if len(aspect_ratios_per_layer) != n_predictor_layers:
raise ValueError("It must be either aspect_ratios_per_layer is None or len(aspect_ratios_per_layer) == {}, but len(aspect_ratios_per_layer) == {}.".format(n_predictor_layers, len(aspect_ratios_per_layer)))
if (min_scale is None or max_scale is None) and scales is None:
raise ValueError("Either `min_scale` and `max_scale` or `scales` need to be specified.")
if scales:
if len(scales) != n_predictor_layers+1:
raise ValueError("It must be either scales is None or len(scales) == {}, but len(scales) == {}.".format(n_predictor_layers+1, len(scales)))
else: # If no explicit list of scaling factors was passed, compute the list of scaling factors from `min_scale` and `max_scale`
scales = np.linspace(min_scale, max_scale, n_predictor_layers+1)
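        # For example, with the defaults min_scale=0.1, max_scale=0.9 and
        # n_predictor_layers=4, this yields `scales = [0.1, 0.3, 0.5, 0.7, 0.9]`.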
if len(variances) != 4: # We need one variance value for each of the four box coordinates
raise ValueError("4 variance values must be pased, but {} values were received.".format(len(variances)))
variances = np.array(variances)
if np.any(variances <= 0):
raise ValueError("All variances must be >0, but the variances given are {}".format(variances))
# Set the aspect ratios for each predictor layer. These are only needed for the anchor box layers.
if aspect_ratios_per_layer:
aspect_ratios_conv4 = aspect_ratios_per_layer[0]
aspect_ratios_conv5 = aspect_ratios_per_layer[1]
aspect_ratios_conv6 = aspect_ratios_per_layer[2]
aspect_ratios_conv7 = aspect_ratios_per_layer[3]
else:
aspect_ratios_conv4 = aspect_ratios_global
aspect_ratios_conv5 = aspect_ratios_global
aspect_ratios_conv6 = aspect_ratios_global
aspect_ratios_conv7 = aspect_ratios_global
# Compute the number of boxes to be predicted per cell for each predictor layer.
# We need this so that we know how many channels the predictor layers need to have.
if aspect_ratios_per_layer:
n_boxes = []
for aspect_ratios in aspect_ratios_per_layer:
if (1 in aspect_ratios) & two_boxes_for_ar1:
n_boxes.append(len(aspect_ratios) + 1) # +1 for the second box for aspect ratio 1
else:
n_boxes.append(len(aspect_ratios))
n_boxes_conv4 = n_boxes[0]
n_boxes_conv5 = n_boxes[1]
n_boxes_conv6 = n_boxes[2]
n_boxes_conv7 = n_boxes[3]
else: # If only a global aspect ratio list was passed, then the number of boxes is the same for each predictor layer
if (1 in aspect_ratios_global) & two_boxes_for_ar1:
n_boxes = len(aspect_ratios_global) + 1
else:
n_boxes = len(aspect_ratios_global)
n_boxes_conv4 = n_boxes
n_boxes_conv5 = n_boxes
n_boxes_conv6 = n_boxes
n_boxes_conv7 = n_boxes
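    # For example, the default `aspect_ratios_global = [0.5, 1.0, 2.0]` together with
    # `two_boxes_for_ar1=True` yields n_boxes = 4 for every predictor layer.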
# Input image format
img_height, img_width, img_channels = image_size[0], image_size[1], image_size[2]
# Design the actual network
x = Input(shape=(img_height, img_width, img_channels))
normed = Lambda(lambda z: z/127.5 - 1., # Convert input feature range to [-1,1]
output_shape=(img_height, img_width, img_channels),
name='lambda1')(x)
conv1 = Conv2D(32, (5, 5), name='conv1', strides=(1, 1), padding="same", kernel_initializer='he_normal')(normed)
    conv1 = BatchNormalization(axis=3, momentum=0.99, name='bn1')(conv1) # TensorFlow uses the channels-last data format `(batch, height, width, channels)`, hence the channel axis is 3
conv1 = ELU(name='elu1')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2), name='pool1')(conv1)
conv2 = Conv2D(48, (3, 3), name='conv2', strides=(1, 1), padding="same", kernel_initializer='he_normal')(pool1)
conv2 = BatchNormalization(axis=3, momentum=0.99, name='bn2')(conv2)
conv2 = ELU(name='elu2')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2), name='pool2')(conv2)
conv3 = Conv2D(64, (3, 3), name='conv3', strides=(1, 1), padding="same", kernel_initializer='he_normal')(pool2)
conv3 = BatchNormalization(axis=3, momentum=0.99, name='bn3')(conv3)
conv3 = ELU(name='elu3')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2), name='pool3')(conv3)
conv4 = Conv2D(64, (3, 3), name='conv4', strides=(1, 1), padding="same", kernel_initializer='he_normal')(pool3)
conv4 = BatchNormalization(axis=3, momentum=0.99, name='bn4')(conv4)
conv4 = ELU(name='elu4')(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2), name='pool4')(conv4)
conv5 = Conv2D(48, (3, 3), name='conv5', strides=(1, 1), padding="same", kernel_initializer='he_normal')(pool4)
conv5 = BatchNormalization(axis=3, momentum=0.99, name='bn5')(conv5)
conv5 = ELU(name='elu5')(conv5)
pool5 = MaxPooling2D(pool_size=(2, 2), name='pool5')(conv5)
conv6 = Conv2D(48, (3, 3), name='conv6', strides=(1, 1), padding="same", kernel_initializer='he_normal')(pool5)
conv6 = BatchNormalization(axis=3, momentum=0.99, name='bn6')(conv6)
conv6 = ELU(name='elu6')(conv6)
pool6 = MaxPooling2D(pool_size=(2, 2), name='pool6')(conv6)
conv7 = Conv2D(32, (3, 3), name='conv7', strides=(1, 1), padding="same", kernel_initializer='he_normal')(pool6)
conv7 = BatchNormalization(axis=3, momentum=0.99, name='bn7')(conv7)
conv7 = ELU(name='elu7')(conv7)
# The next part is to add the convolutional predictor layers on top of the base network
# that we defined above. Note that I use the term "base network" differently than the paper does.
# To me, the base network is everything that is not convolutional predictor layers or anchor
# box layers. In this case we'll have four predictor layers, but of course you could
# easily rewrite this into an arbitrarily deep base network and add an arbitrary number of
# predictor layers on top of the base network by simply following the pattern shown here.
# Build the convolutional predictor layers on top of conv layers 4, 5, 6, and 7
# We build two predictor layers on top of each of these layers: One for classes (classification), one for box coordinates (localization)
    # We predict `n_classes` confidence values for each box, hence the `classes` predictors have depth `n_boxes * n_classes`
# We predict 4 box coordinates for each box, hence the `boxes` predictors have depth `n_boxes * 4`
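    # For example, with n_boxes = 4 and n_classes = 6 (five positive classes plus the
    # background class), each `classes` predictor below has 4 * 6 = 24 output channels
    # and each `boxes` predictor has 4 * 4 = 16.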
# Output shape of `classes`: `(batch, height, width, n_boxes * n_classes)`
classes4 = Conv2D(n_boxes_conv4 * n_classes, (3, 3), strides=(1, 1), padding="valid", name='classes4', kernel_initializer='he_normal')(conv4)
classes5 = Conv2D(n_boxes_conv5 * n_classes, (3, 3), strides=(1, 1), padding="valid", name='classes5', kernel_initializer='he_normal')(conv5)
classes6 = Conv2D(n_boxes_conv6 * n_classes, (3, 3), strides=(1, 1), padding="valid", name='classes6', kernel_initializer='he_normal')(conv6)
classes7 = Conv2D(n_boxes_conv7 * n_classes, (3, 3), strides=(1, 1), padding="valid", name='classes7', kernel_initializer='he_normal')(conv7)
# Output shape of `boxes`: `(batch, height, width, n_boxes * 4)`
boxes4 = Conv2D(n_boxes_conv4 * 4, (3, 3), strides=(1, 1), padding="valid", name='boxes4', kernel_initializer='he_normal')(conv4)
boxes5 = Conv2D(n_boxes_conv5 * 4, (3, 3), strides=(1, 1), padding="valid", name='boxes5', kernel_initializer='he_normal')(conv5)
boxes6 = Conv2D(n_boxes_conv6 * 4, (3, 3), strides=(1, 1), padding="valid", name='boxes6', kernel_initializer='he_normal')(conv6)
boxes7 = Conv2D(n_boxes_conv7 * 4, (3, 3), strides=(1, 1), padding="valid", name='boxes7', kernel_initializer='he_normal')(conv7)
# Generate the anchor boxes
# Output shape of `anchors`: `(batch, height, width, n_boxes, 8)`
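    # The last axis holds the four anchor box coordinates followed by the four variance values for each box.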
anchors4 = AnchorBoxes(img_height, img_width, this_scale=scales[0], next_scale=scales[1], aspect_ratios=aspect_ratios_conv4,
two_boxes_for_ar1=two_boxes_for_ar1, limit_boxes=limit_boxes, variances=variances, coords=coords, normalize_coords=normalize_coords, name='anchors4')(boxes4)
anchors5 = AnchorBoxes(img_height, img_width, this_scale=scales[1], next_scale=scales[2], aspect_ratios=aspect_ratios_conv5,
two_boxes_for_ar1=two_boxes_for_ar1, limit_boxes=limit_boxes, variances=variances, coords=coords, normalize_coords=normalize_coords, name='anchors5')(boxes5)
anchors6 = AnchorBoxes(img_height, img_width, this_scale=scales[2], next_scale=scales[3], aspect_ratios=aspect_ratios_conv6,
two_boxes_for_ar1=two_boxes_for_ar1, limit_boxes=limit_boxes, variances=variances, coords=coords, normalize_coords=normalize_coords, name='anchors6')(boxes6)
anchors7 = AnchorBoxes(img_height, img_width, this_scale=scales[3], next_scale=scales[4], aspect_ratios=aspect_ratios_conv7,
two_boxes_for_ar1=two_boxes_for_ar1, limit_boxes=limit_boxes, variances=variances, coords=coords, normalize_coords=normalize_coords, name='anchors7')(boxes7)
# Reshape the class predictions, yielding 3D tensors of shape `(batch, height * width * n_boxes, n_classes)`
# We want the classes isolated in the last axis to perform softmax on them
classes4_reshaped = Reshape((-1, n_classes), name='classes4_reshape')(classes4)
classes5_reshaped = Reshape((-1, n_classes), name='classes5_reshape')(classes5)
classes6_reshaped = Reshape((-1, n_classes), name='classes6_reshape')(classes6)
classes7_reshaped = Reshape((-1, n_classes), name='classes7_reshape')(classes7)
# Reshape the box coordinate predictions, yielding 3D tensors of shape `(batch, height * width * n_boxes, 4)`
# We want the four box coordinates isolated in the last axis to compute the smooth L1 loss
boxes4_reshaped = Reshape((-1, 4), name='boxes4_reshape')(boxes4)
boxes5_reshaped = Reshape((-1, 4), name='boxes5_reshape')(boxes5)
boxes6_reshaped = Reshape((-1, 4), name='boxes6_reshape')(boxes6)
boxes7_reshaped = Reshape((-1, 4), name='boxes7_reshape')(boxes7)
# Reshape the anchor box tensors, yielding 3D tensors of shape `(batch, height * width * n_boxes, 8)`
anchors4_reshaped = Reshape((-1, 8), name='anchors4_reshape')(anchors4)
anchors5_reshaped = Reshape((-1, 8), name='anchors5_reshape')(anchors5)
anchors6_reshaped = Reshape((-1, 8), name='anchors6_reshape')(anchors6)
anchors7_reshaped = Reshape((-1, 8), name='anchors7_reshape')(anchors7)
    # Concatenate the predictions from the different layers and the associated anchor box tensors
# Axis 0 (batch) and axis 2 (n_classes or 4, respectively) are identical for all layer predictions,
# so we want to concatenate along axis 1
    # Output shape of `classes_concat`: (batch, n_boxes_total, n_classes)
classes_concat = Concatenate(axis=1, name='classes_concat')([classes4_reshaped,
classes5_reshaped,
classes6_reshaped,
classes7_reshaped])
    # Output shape of `boxes_concat`: (batch, n_boxes_total, 4)
boxes_concat = Concatenate(axis=1, name='boxes_concat')([boxes4_reshaped,
boxes5_reshaped,
boxes6_reshaped,
boxes7_reshaped])
    # Output shape of `anchors_concat`: (batch, n_boxes_total, 8)
anchors_concat = Concatenate(axis=1, name='anchors_concat')([anchors4_reshaped,
anchors5_reshaped,
anchors6_reshaped,
anchors7_reshaped])
# The box coordinate predictions will go into the loss function just the way they are,
# but for the class predictions, we'll apply a softmax activation layer first
classes_softmax = Activation('softmax', name='classes_softmax')(classes_concat)
# Concatenate the class and box coordinate predictions and the anchors to one large predictions tensor
# Output shape of `predictions`: (batch, n_boxes_total, n_classes + 4 + 8)
predictions = Concatenate(axis=2, name='predictions')([classes_softmax, boxes_concat, anchors_concat])
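    # Along the last axis, each box thus carries its `n_classes` softmaxed confidences,
    # its four predicted coordinate offsets, and its eight anchor box entries, in that order.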
model = Model(inputs=x, outputs=predictions)
    # Get the spatial dimensions (height, width) of the convolutional predictor layers; we need them to generate the default boxes
# The spatial dimensions are the same for the `classes` and `boxes` predictors
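    # Since the predictor convolutions above use 3x3 kernels with 'valid' padding and stride 1,
    # each predictor's spatial dimensions are those of its input feature layer minus 2.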
predictor_sizes = np.array([classes4._keras_shape[1:3],
classes5._keras_shape[1:3],
classes6._keras_shape[1:3],
classes7._keras_shape[1:3]])
return model, predictor_sizes
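
# A minimal usage sketch (not part of the original module). The input size and class
# count below are arbitrary placeholder values; when training, the same parameters
# must also be passed to `SSDBoxEncoder` (see the docstring of `build_model()`).
if __name__ == '__main__':
    model, predictor_sizes = build_model(image_size=(300, 480, 3),
                                         n_classes=6, # 5 positive classes + 1 background class
                                         min_scale=0.1,
                                         max_scale=0.9,
                                         aspect_ratios_global=[0.5, 1.0, 2.0],
                                         two_boxes_for_ar1=True,
                                         variances=[1.0, 1.0, 1.0, 1.0],
                                         coords='centroids',
                                         normalize_coords=False)
    model.summary() # Print an overview of the model's layers and output shapes
    print(predictor_sizes) # The `(height, width)` of each predictor layer's output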