Feature 1057 - Add test and train transforms for image API in the backend #1146
base: nextjs
Conversation
@@ -9,19 +9,27 @@
from training.core.trainer import ClassificationTrainer
from training.routes.image.schemas import ImageParams
from training.core.authenticator import FirebaseAuth
import torchvision.transforms as transforms
🚫 [pyright] reported by reviewdog 🐶
Import "torchvision.transforms" could not be resolved (reportMissingImports)
Hey Keon, good work on this so far! And good question! Can you show me how the endpoint currently handles transforms with multiple arguments? I can probably give you a better answer once I see the output.
Full disclosure -- verifying transforms is not straightforward, but usually it's a good sign if the model training in Postman completes successfully (like if you put in the MNIST dataset and the model trains well with decreasing loss).
But if you want to check 100% that transforms are working correctly, what I would generally do to make sure is (see the sketch after this list):
- Modify the code to save a few sample images after applying the transforms.
- After sending a request with a specific transform configuration, you can check the saved images to ensure the transforms are applied correctly.
- Then, you can compare the transformed images with the expected output based on the transform parameters.
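Here's a rough sketch of what that could look like -- purely illustrative, not code from this PR (it assumes a torchvision-style dataset like MNIST):

```python
import torchvision.transforms as transforms
from torchvision.datasets import MNIST
from torchvision.utils import save_image

# The transform configuration under test (illustrative values).
transform = transforms.Compose([
    transforms.RandomCrop(24),
    transforms.ToTensor(),
])

dataset = MNIST(root="./data", train=True, download=True, transform=transform)

# Dump a few transformed samples to disk for manual inspection.
for i in range(5):
    img, _ = dataset[i]  # img is a (1, 24, 24) tensor after the crop
    save_image(img, f"transformed_{i}.png")
```

If the saved images come out as 24x24 crops of the originals, you know the pipeline is actually being applied.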
Oh and another thing -- use well-understood datasets like MNIST or CIFAR for which the impact of certain transforms is predictable. For example, applying a RandomCrop to images in these datasets should result in visually verifiable changes. You want to compare the performance metrics (accuracy, loss) of models trained with and without the transforms to see if the transforms have the expected impact on model learning.
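As a rough with/without comparison, something like this could work as a sanity check (all names and hyperparameters here are illustrative stand-ins, not the PR's actual code):

```python
import torch
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST

def final_loss(transform):
    """Train a tiny linear model for a few batches and return the last loss."""
    ds = MNIST(root="./data", train=True, download=True, transform=transform)
    loader = DataLoader(ds, batch_size=64, shuffle=True)
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.CrossEntropyLoss()
    loss = torch.tensor(0.0)
    for i, (x, y) in enumerate(loader):
        if i >= 100:  # a few batches are enough for a sanity check
            break
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

baseline = final_loss(transforms.ToTensor())
# Pad-and-crop keeps the 28x28 size, so the same linear layer works.
augmented = final_loss(transforms.Compose([
    transforms.RandomCrop(28, padding=4),
    transforms.ToTensor(),
]))
print(f"baseline loss: {baseline:.3f}  augmented loss: {augmented:.3f}")
```

Both runs should show decreasing loss; the augmented run will typically end a bit higher since the random crops make the task harder, which is itself a sign the transform is doing something.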
Lmk if this makes sense. This is just general advice, but if I see what the output currently is, I can probably give you a better answer.
Also just wondering -- for the frontend part of this, are you thinking of tackling that in this PR once you've got the endpoint all sorted out and everything's looking good on the backend? Or do you wanna handle that in a separate PR? I'm cool with either approach, so just let me know what works best for you!
For the question regarding the crop transform affecting layer sizes, I'm not completely sure, but I think you can use pooling layers like AdaptiveAvgPool2d or other global pooling layers, since they can handle variable input sizes. Definitely worth exploring those options in the frontend/model architecture phase.
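To illustrate that last point, here's a minimal sketch (not from this PR) of how AdaptiveAvgPool2d makes the classifier head independent of the crop size:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Pools any H x W feature map down to 1 x 1, so the linear layer's
        # input size no longer depends on the crop size.
        self.pool = nn.AdaptiveAvgPool2d((1, 1))
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.pool(x).flatten(1)
        return self.classifier(x)

# The same network handles both full 28x28 images and 24x24 crops:
net = TinyNet()
print(net(torch.randn(2, 1, 28, 28)).shape)  # torch.Size([2, 10])
print(net(torch.randn(2, 1, 24, 24)).shape)  # torch.Size([2, 10])
```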
Adds Test & Train transforms for the image API in the backend
GitHub Issue Number Here: #1057
This allows users to have custom transforms for image training.
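For context, one hypothetical shape for this (the names here are illustrative, not the actual schema in this PR) is to map a user-supplied spec onto torchvision constructors, including transforms that take multiple arguments:

```python
import torchvision.transforms as transforms

# Hypothetical whitelist of supported transforms (illustrative subset).
SUPPORTED = {
    "Resize": transforms.Resize,
    "RandomCrop": transforms.RandomCrop,
    "Normalize": transforms.Normalize,
    "ToTensor": transforms.ToTensor,
}

def build_transforms(spec):
    """spec: list of [name, args] pairs, e.g. [["Resize", [32]], ["ToTensor", []]]."""
    return transforms.Compose([SUPPORTED[name](*args) for name, args in spec])

pipeline = build_transforms([
    ["RandomCrop", [24]],
    ["ToTensor", []],
    ["Normalize", [[0.1307], [0.3081]]],  # multi-argument: mean and std
])
```

However the endpoint actually encodes it, the multi-argument case (e.g. Normalize's mean and std) is the one worth pinning down, per the open question below.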
Testing Methodology
Used Postman to send transforms and verify their outcomes.
Any other considerations
There's still stuff to be verified and decided on - I'm not sure if the way the endpoint handles transforms with multiple arguments is correct, and I'm not sure how exactly to verify the transforms.