Description
Environment:
- OS: Windows 11
- IDE: Visual Studio 2022
- Target framework: .NET 6.0
- cuDNN v7.6.0
- CUDA 10.1
What I'm trying to achieve
TL;DR: classifying images with my GPU instead of my CPU.
I'm trying to classify images into 14 different categories. Currently I have about 40,000 images, but I plan to add more to build a better dataset. Initially I trained on my CPU and got pretty decent accuracy (90-98% in most cases), but training and prediction were rather slow. I read some articles saying a GPU would improve this speed, so I bought a GPU for this, but the results were rather unexpected.
What did I do?
In the "Environment" tab of Model Builder I selected the "Local (GPU)" option, installed the required extensions, and the checks turned green. I uninstalled the NuGet package I used for CPU training (SciSharp.TensorFlow.Redist) and installed the one required for the GPU (SciSharp.TensorFlow.Redist-Windows-GPU). When I hit the Train button in the "Train" tab I was amazed by the speed: it flew through the bottleneck computation, indicating the GPU was being used (confirmed via GPU usage in Task Manager).
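For reference, the package swap amounts to the following change in the project file (a sketch of the relevant `.csproj` fragment; the version number shown is an assumption, substitute whatever version your setup uses):

```xml
<!-- Before (CPU training): -->
<ItemGroup>
  <!-- CPU-only TensorFlow redistributable -->
  <PackageReference Include="SciSharp.TensorFlow.Redist" Version="2.3.1" />
</ItemGroup>

<!-- After (GPU training): -->
<ItemGroup>
  <!-- Windows GPU TensorFlow redistributable; requires a matching CUDA/cuDNN install -->
  <PackageReference Include="SciSharp.TensorFlow.Redist-Windows-GPU" Version="2.3.1" />
</ItemGroup>
```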
However, my best MicroAccuracy dropped from ~0.93 to ~0.43, and I get about 8% accuracy in the "Evaluate" tab, which is completely unexpected.
- Model Builder config:
{
  "Scenario": "ImageClassification",
  "DataSource": {
    "Type": "Folder",
    "Version": 1,
    "FolderPath": "path\\To\\Images\\Folder"
  },
  "Environment": {
    "Type": "LocalGPU",
    "Version": 1
  },
  "Type": "TrainingConfig",
  "Version": 3,
  "TrainingOption": {
    "Version": 0,
    "Type": "ClassificationTrainingOption",
    "TrainingTime": 2147483647,
    "Seed": 0
  }
}
What could be causing this accuracy gap between my CPU and GPU setups?
Do I need more training images, or did I overlook something else? I'm looking forward to any help or suggestions!
Thank you for reading!