This project is part of Course 2 of the Deep Learning Specialization by Andrew Ng. The goal was to train a neural network to recommend positions for football players, specifically the goalkeeper. The real challenge, though, was preventing overfitting and improving the model's generalization using two popular regularization techniques:
- L2 Regularization
- Dropout
I implemented a 3-layer deep neural network (using NumPy only, no high-level frameworks) with the following structure, sketched in code below:
- Input Layer
- Hidden Layer 1 - ReLU + Dropout
- Hidden Layer 2 - ReLU + Dropout
- Output Layer - Sigmoid (for binary classification)
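For concreteness, here's a minimal NumPy sketch of the forward pass through that architecture. The `parameters` dict layout and function names are my own shorthand, not the assignment's exact code:

```python
import numpy as np

def relu(Z):
    return np.maximum(0, Z)

def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def forward_propagation(X, parameters):
    """LINEAR -> ReLU -> LINEAR -> ReLU -> LINEAR -> sigmoid."""
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]
    W3, b3 = parameters["W3"], parameters["b3"]

    A1 = relu(W1 @ X + b1)      # hidden layer 1
    A2 = relu(W2 @ A1 + b2)     # hidden layer 2
    A3 = sigmoid(W3 @ A2 + b3)  # output layer: binary probability
    return A3
```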
Here's a visual snapshot of the kind of data setup or game dynamics our model might try to understand:
The blue circles represent French players, and the red ones are opponents. The model learns from this kind of setup to suggest optimal positions, especially for the goalkeeper, under different threat conditions.
- L2 regularization: penalized large weights to prevent the model from overfitting the training data.
- In the cost function, I added the L2 penalty term:
L2_cost = (lambd / (2 * m)) * np.sum(np.square(W))  # summed over each weight matrix
- In backpropagation, I adjusted gradients like this:
dW += (lambd / m) * W
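Putting those two pieces together, here's a minimal sketch of the regularized cost (`lambd` stands in for λ since `lambda` is a reserved word in Python; the `parameters` dict keys `W1`–`W3` are my assumption):

```python
import numpy as np

def compute_cost_with_l2(A3, Y, parameters, lambd):
    """Cross-entropy cost plus the L2 penalty over every weight matrix."""
    m = Y.shape[1]
    cross_entropy = -np.sum(Y * np.log(A3) + (1 - Y) * np.log(1 - A3)) / m
    l2_penalty = (lambd / (2 * m)) * sum(
        np.sum(np.square(parameters[key])) for key in ("W1", "W2", "W3")
    )
    return cross_entropy + l2_penalty
```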
- Dropout: randomly turned off neurons during training to reduce reliance on specific pathways.
- Applied dropout only on the hidden layers, not on the input or output.
- Used inverted dropout:
- Created dropout masks:
D = (np.random.rand(...) < keep_prob).astype(int)
- Applied masks and scaled the activations:
A *= D
A /= keep_prob
- During backprop, I did the same for dA (see the sketch after the forward-pass snippet below).
I modified the forward pass to include dropout on the hidden layers:
D1 = (np.random.rand(A1.shape[0], A1.shape[1]) < keep_prob).astype(int)  # mask: 1 = keep the unit
A1 = (A1 * D1) / keep_prob  # zero dropped units, then rescale to preserve the expected activation
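The matching backward step reuses the cached mask on the gradient. A minimal sketch, where `backward_dropout` is a hypothetical helper name of mine, not the assignment's:

```python
def backward_dropout(dA, D, keep_prob):
    # Zero the gradients of units that were dropped in the forward pass,
    # then rescale so the gradient matches the forward-pass scaling.
    return (dA * D) / keep_prob

# Usage during backprop, reusing the mask D1 cached above:
# dA1 = backward_dropout(dA1, D1, keep_prob)
```

Reusing the exact same mask in both passes is what keeps the gradient consistent with the activations the network actually computed on that training step.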