Replies: 1 comment 2 replies

Figured it out eventually.
I'm working on fine-tuning the crnn_vgg16_b recognition model.
I've run into a challenge: my training loss fluctuates rapidly, and I haven't been able to stabilize it despite trying a few different setups.
Since I'm still learning the ropes, I'd appreciate any advice on the hyperparameters below to help improve stability and performance.
Here's a quick summary of my setup:
Dataset size: 1 million words
Train/Validation split: 80%/20%
Hyperparameters tried:
- `--epochs 20 --lr 0.0001 --batch_size 16`
- `--epochs 50 --lr 0.000001 --batch_size 32`
- `--epochs 20 --lr 0.000001 --batch_size 32`
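Not a docTR-specific answer, but two generic techniques that often tame a fluctuating loss are gradient clipping and a warmup-then-decay learning-rate schedule (a fixed `--lr` jumps straight to full strength, which can cause early spikes). Here is a minimal sketch of a linear-warmup + cosine-decay schedule; the function name `lr_at_step` and the `warmup_steps` default are my own illustration, not part of any training script:

```python
import math

def lr_at_step(step, total_steps, base_lr=1e-4, warmup_steps=1000):
    """Linear warmup for the first warmup_steps, then cosine decay to 0.

    Hypothetical helper for illustration: ramping the LR up gradually
    avoids large early updates, and the cosine tail smooths convergence.
    """
    if step < warmup_steps:
        # Linear ramp from base_lr/warmup_steps up to base_lr.
        return base_lr * (step + 1) / warmup_steps
    # Cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))
```

In a PyTorch training loop you would get the same behavior from `torch.optim.lr_scheduler` (e.g. `CosineAnnealingLR` after a warmup), and you can cap update magnitude with `torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)` before each `optimizer.step()`.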