introduce ilab profile set doc #130
base: main
Conversation
Signed-off-by: Charlie Doern <cdoern@redhat.com>
1. User needs to set:
   a. Model Path for SDG, Eval, Serving, Training
   b. GPUs for SDG, Eval, Serving, Training
   c. Training Config Per-GPU (based on vram)
Suggested change:
- c. Training Config Per-GPU (based on vram)
+ c. Training Config Per-GPU (based on vRAM)
   a. Model Path for SDG, Eval, Serving, Training
   b. GPUs for SDG, Eval, Serving, Training
Serving just for vLLM, or vLLM and Llama CPP?
### Workflow

The user will run `ilab profile set`
Will this be required before running other `ilab` commands? If not, what is the path for a user that does not do this?
The user will run `ilab profile set`
Alongside the various model paths and GPU amounts, this command will
set the train profile for the following scenarions:
Suggested change:
- set the train profile for the following scenarions:
+ set the train profile for the following scenarios:
2. single consumer GPU
3. multi consumer GPU
4. single server GPU
5. multi server GPU
Are terms like "server" and "consumer" familiar to users? I myself don't fully understand what this means.
5. multi server GPU
6. MacOS (once MPS support is added)
There is also a Choose for me option which reads the Nvidia cards on the system and assigns a cfg+train profile based off the amount of vRAM
Suggested change:
- There is also a Choose for me option which reads the Nvidia cards on the system and assigns a cfg+train profile based off the amount of vRAM
+ There is also a "Choose for me" option which reads the Nvidia cards on the system and assigns a cfg+train profile based off the amount of vRAM
What if I don't have an Nvidia card?
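The "Choose for me" behavior described in the diff could look roughly like the sketch below. This is a hypothetical illustration only, not the actual ilab implementation: the profile names, the 40 GiB server/consumer vRAM threshold, and the CPU fallback for machines without an Nvidia card are all assumptions.

```python
def choose_profile(gpu_vram_gib: list[int]) -> str:
    """Pick a train profile from the detected Nvidia GPUs' vRAM sizes.

    Hypothetical sketch: profile names and the 40 GiB threshold are
    illustrative assumptions, not values from the ilab codebase.
    """
    if not gpu_vram_gib:
        # No Nvidia card detected: fall back to the CPU profile.
        return "cpu"
    # Treat large-vRAM cards (e.g. A100-class) as "server", smaller
    # cards (e.g. RTX 4090-class) as "consumer".
    klass = "server" if max(gpu_vram_gib) >= 40 else "consumer"
    count = "multi" if len(gpu_vram_gib) > 1 else "single"
    return f"{count}_{klass}_gpu"

print(choose_profile([]))        # cpu
print(choose_profile([24]))      # single_consumer_gpu
print(choose_profile([80, 80]))  # multi_server_gpu
```

In a real implementation the vRAM list would come from querying the driver (e.g. NVML/`nvidia-smi`) rather than being passed in directly.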
[1] CPU
[2] Single Consumer GPU
[3] Single Server GPU
[4] Multi Consumer GPU
[5] Multi Server GPU
[6] MacOS
Is the assumption here systems 0-5 are Linux?
@cdoern what is going on with this?
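The numbered menu quoted above could be driven by something like the following sketch. The function names (`render_menu`, `parse_choice`) are hypothetical and the validation behavior is an assumption; this only illustrates the selection mechanics of the proposed `ilab profile set` prompt.

```python
# Profile labels taken from the menu in the proposed doc.
PROFILES = [
    "CPU",
    "Single Consumer GPU",
    "Single Server GPU",
    "Multi Consumer GPU",
    "Multi Server GPU",
    "MacOS",
]

def render_menu() -> str:
    """Render the numbered profile menu, one '[n] Label' entry per line."""
    return "\n".join(f"[{i}] {name}" for i, name in enumerate(PROFILES, start=1))

def parse_choice(raw: str) -> str:
    """Map the user's numeric input back to a profile label."""
    idx = int(raw.strip())
    if not 1 <= idx <= len(PROFILES):
        raise ValueError(f"choice must be between 1 and {len(PROFILES)}")
    return PROFILES[idx - 1]

print(render_menu())
print(parse_choice("6"))  # MacOS
```

An interactive version would wrap `parse_choice` around `input()` and re-prompt on `ValueError`.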