
[Onprem] Support for Different Type of GPUs + Small Bugfix #1356

Merged 3 commits into master from onprem-custom on Nov 4, 2022

Conversation

@michaelzhiluo (Collaborator) commented Nov 1, 2022

Adds support for the 1080, 2080, A5000, and A6000 GPU types (GPUs not natively supported by Ray's backend) and allows scheduling on such types.

Also fixes the bug introduced in #1287.

Test:

sky status

Local clusters:
NAME                USER    HEAD_IP        RESOURCES                 COMMAND                         
shelby              ubuntu  3.144.21.13    [{'V100': 1}]             sky exec shelby --gpus V1...    
thomas              ubuntu  3.94.208.15    [{'1080': 1, '2080': 1}]  sky launch -c thomas --gpus... 
  • sky launch -c thomas --gpus 1080:1 -- 'echo hi' works and occupies the right placement groups
  • sky launch -c shelby --gpus V100:1 -- 'echo hi' still works and occupies the right placement group (+ Ray's automatic GPU placement group)

@@ -231,10 +231,15 @@ def add_gang_scheduling_placement_group(
'CPU': backend_utils.DEFAULT_TASK_CPU_DEMAND
} for _ in range(num_nodes)]

ray_supported_accs = backend_utils.supported_ray_accs()
@infwinston (Member) commented:
Sorry, could you explain why supported_ray_accs is needed?
For example, we can sky launch gpunode --gpus M60 --cloud aws and Ray will correctly recognize M60 as a GPU even though it's not on the list, right?

ubuntu@ip-172-31-82-9:~$ ray status
======== Autoscaler status: 2022-11-01 21:22:40.810182 ========
Node status
---------------------------------------------------------------
Healthy:
 1 ray.head.default
Pending:
 (no pending nodes)
Recent failures:
 (no failures)

Resources
---------------------------------------------------------------
Usage:
 0.0/4.0 CPU
 0.0/1.0 GPU
 0.0/1.0 M60
 0.0/1.0 accelerator_type:M60
 0.00/17.552 GiB memory
 0.00/8.776 GiB object_store_memory

Demands:
 (no resource demands)

@michaelzhiluo (Collaborator, Author) replied:
Thanks Weilin for the quick review!

When Ray detects an M60 or another GPU, it automatically adds a GPU entry to the placement group under the hood, as you can see in your M60 logs.

I tested this with a 1080 GPU yesterday, and the GPU field did not show up. So, to be safe, some of this code handles the case where the GPU field does not show up.
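
Roughly, the handling looks like the sketch below (illustrative only, not the PR's exact code; make_gang_bundle and the bundle shape are hypothetical): the generic 'GPU' key is added to a placement-group bundle only when the cluster actually advertises it.

```python
import ray

def make_gang_bundle(acc_name: str, acc_count: int, cpu_demand: float) -> dict:
    """Build one placement-group bundle for a node with `acc_count` accelerators."""
    bundle = {'CPU': cpu_demand, acc_name: acc_count}
    # ray.cluster_resources() looks like {'CPU': 4.0, 'GPU': 1.0, 'M60': 1.0};
    # on nodes where Ray failed to autodetect the GPU, the 'GPU' key is absent.
    if 'GPU' in ray.cluster_resources():
        bundle['GPU'] = acc_count
    return bundle
```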

@infwinston (Member) commented Nov 1, 2022:
Do we understand why the 1080 can't be detected by Ray? I was asking because the list here isn't the cause: M60 is not in it but is correctly identified.

I traced Ray's code a bit. It looks like they identify the number of GPUs via the function _autodetect_num_gpus below:
https://github.com/ray-project/ray/blob/f39d323ed5916b67042407231f5b91851e8fa0b5/python/ray/_private/resource_spec.py#L268
It uses either GPUtil.getGPUs(), which relies on nvidia-smi, or /proc/driver/nvidia/gpus/, which relies on the NVIDIA driver, I believe.

Then, as long as nvidia-smi or the driver works, a 1080 GPU should be correctly detected. Could you share more on why Ray failed?
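
For reference, a rough approximation of that two-path detection (a simplified sketch, not Ray's actual code): try GPUtil first, then fall back to counting entries under /proc/driver/nvidia/gpus.

```python
import os

def autodetect_num_gpus() -> int:
    try:
        import GPUtil  # wraps nvidia-smi
        return len(GPUtil.getGPUs())
    except ImportError:
        pass
    # Fallback: each directory under /proc/driver/nvidia/gpus is one GPU
    # (requires only the NVIDIA kernel driver, not nvidia-smi).
    proc_gpus = '/proc/driver/nvidia/gpus'
    if os.path.isdir(proc_gpus):
        return len(os.listdir(proc_gpus))
    return 0
```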

@michaelzhiluo (Collaborator, Author) replied:

Great find! This is my family member's GPU, and it's fair to say nvidia-smi was not installed, which could be why Ray failed to detect it.

@WoosukKwon (Collaborator) commented:
@michaelzhiluo
Ray could detect TITAN V GPUs on my machine:

$ ray status
======== Autoscaler status: 2022-11-01 15:06:23.656898 ========
Node status
---------------------------------------------------------------
Healthy:
 1 node_a25d5edb0e115a8ee02542d40ff6868d055789a81d6041717e19d11e
Pending:
 (no pending nodes)
Recent failures:
 (no failures)

Resources
---------------------------------------------------------------
Usage:
 0.0/48.0 CPU
 0.0/8.0 GPU
 0.0/1.0 accelerator_type:TITAN
 0.00/320.967 GiB memory
 0.00/141.549 GiB object_store_memory

Demands:
 (no resource demands)

I believe it's a problem due to an uninstalled (or misinstalled) nvidia-smi, and there is nothing we can do in that case.

@michaelzhiluo (Collaborator, Author) commented:

Ok, I will work on a fix to detect whether ray status reports a GPU or not. It seems this has nothing to do with the type of GPU, but with how Ray detects GPUs. Thanks @infwinston @WoosukKwon

@Michaelvll (Collaborator) commented Nov 1, 2022:

Ok, I will work on a fix to detect whether ray status reports a GPU or not. It seems this has nothing to do with the type of GPU, but with how Ray detects GPUs. Thanks @infwinston @WoosukKwon

For the ray clusters, we can pass --num-gpus to ray start in the ray YAML to let the ray cluster know how many GPUs are on each node, just as we did for GCP:

ray stop; ray start --disable-usage-stats --head --port=6379 --object-manager-port=8076 --autoscaling-config=~/ray_bootstrap_config.yaml {{"--resources='%s'" % custom_resources if custom_resources}} --num-gpus=$SKY_NUM_GPUS || exit 1;
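
For illustration, here is roughly how that template could be rendered for a node with a single 1080 (a sketch with hypothetical values; the real command is generated from the cluster YAML, and the autoscaling-config flag is omitted here for brevity):

```python
import json
import shlex

# Hypothetical values for a node with one 1080 GPU.
custom_resources = json.dumps({'1080': 1})
num_gpus = 1

cmd = ('ray stop; ray start --disable-usage-stats --head --port=6379 '
       '--object-manager-port=8076 '
       f'--resources={shlex.quote(custom_resources)} '
       f'--num-gpus={num_gpus} || exit 1;')
print(cmd)
# ... --resources='{"1080": 1}' --num-gpus=1 || exit 1;
```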

@michaelzhiluo (Collaborator, Author) replied:

Thanks Zhanghao for the great suggestion! Just tested it and it works; no need to worry about whether Ray detects the GPU or not.

@infwinston (Member) left a comment:

I left some comments and questions. Hope they help. Others, please chime in, as I'm not super familiar with the on-prem code and am not sure how to properly test it.

Two resolved (outdated) review threads on sky/backends/backend_utils.py.
'1080',
'2080',
'A5000'
'A6000']
@infwinston (Member) commented:

Do we want to add TITAN Xp (used in a RISE server)?

Also, do you think nvidia-smi --list-gpus is more robust for detecting the number of GPUs, as it's agnostic to GPU names?

weichiang@freddie:~$ nvidia-smi --list-gpus
GPU 0: NVIDIA TITAN Xp (UUID: GPU-67995609-af7b-27e6-2024-0f5f3c837c1a)
GPU 1: NVIDIA TITAN Xp (UUID: GPU-7dc58de3-9a7a-3e5f-6231-c1ac72af9e0d)
GPU 2: NVIDIA TITAN Xp (UUID: GPU-6dcf1597-36aa-b71c-5e4d-713931d79f9b)
GPU 3: NVIDIA TITAN Xp (UUID: GPU-a35c7b48-c7a8-016a-aa46-f82a654bc9a4)
GPU 4: NVIDIA TITAN Xp (UUID: GPU-72bfeabf-998e-ae6e-7ab2-4b0c066869ec)
GPU 5: NVIDIA TITAN Xp (UUID: GPU-6b3b0b2a-89db-24b4-46b5-aab244bcaad8)

weichiang@blaze:~$ nvidia-smi --list-gpus
GPU 0: Tesla P100-PCIE-16GB (UUID: GPU-7942cb79-543a-c8fe-03e8-172ed446f612)
GPU 1: Tesla P100-PCIE-16GB (UUID: GPU-09e76941-9a87-b1b4-8ef7-864c8d98c4af)
GPU 2: Tesla P100-PCIE-16GB (UUID: GPU-a9edaeaa-5c02-2b26-2ad8-7a5d51f0d1ac)
GPU 3: Tesla P100-PCIE-16GB (UUID: GPU-49a19b7d-5a7e-7ee5-8168-c4e77d648998)
GPU 4: Tesla P100-PCIE-16GB (UUID: GPU-b3b998a7-b941-040e-68f6-5f533bcf5636)
GPU 5: Tesla P100-PCIE-16GB (UUID: GPU-8fc7187f-350a-1b62-2ba2-51466153dd06)
GPU 6: Tesla P100-PCIE-16GB (UUID: GPU-7c8c9f76-d06e-2d3d-2f63-873fc9925ecf)
GPU 7: Tesla P100-PCIE-16GB (UUID: GPU-90773d83-1578-d5fb-ccc0-a30b7c0b4e66)
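
A hedged sketch of how that output could be parsed to get a name-agnostic GPU count (and per-model counts), assuming only that nvidia-smi is on PATH:

```python
import collections
import subprocess

def list_gpus() -> 'collections.Counter[str]':
    out = subprocess.run(['nvidia-smi', '--list-gpus'],
                         capture_output=True, text=True, check=True).stdout
    counts = collections.Counter()
    for line in out.splitlines():
        # Lines look like: "GPU 0: NVIDIA TITAN Xp (UUID: GPU-...)"
        if line.startswith('GPU '):
            model = line.split(':', 1)[1].split('(UUID')[0].strip()
            counts[model] += 1
    return counts  # e.g. Counter({'NVIDIA TITAN Xp': 6})
```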

Another collaborator commented:

+1, we should think about how we can do away with this list and just use nvidia-smi to parse the available resources. If this PR is time-sensitive, we can do it in a different PR.

@michaelzhiluo (Collaborator, Author) replied:

We can do this in a separate PR imo. There has to be a better way to detect GPUs, even on an on-prem cluster without nvidia-smi installed (which was this code's assumption; it works on all Linux/Unix machines). Very interested in a discussion on this.

@infwinston (Member) commented:

I think it should be a safe assumption? If nvidia-smi is not installed, Ray will very likely also fail to schedule CUDA jobs.

If we want fewer assumptions, Ray's approach of listing /proc/driver/nvidia/gpus
https://github.com/ray-project/ray/blob/f39d323ed5916b67042407231f5b91851e8fa0b5/python/ray/_private/resource_spec.py#L281 should be better.
It looks pretty robust (similar to lspci; it only assumes the NVIDIA driver is installed) and is name-agnostic.

weichiang@blaze:~$ ls /proc/driver/nvidia/gpus
0000:04:00.0  0000:06:00.0  0000:07:00.0  0000:08:00.0  0000:0c:00.0  0000:0d:00.0  0000:0e:00.0  0000:0f:00.0
weichiang@blaze:~$ cat /proc/driver/nvidia/gpus/0000\:04\:00.0/information
Model: 		 Tesla P100-PCIE-16GB
IRQ:   		 722
GPU UUID: 	 GPU-7942cb79-543a-c8fe-03e8-172ed446f612
Video BIOS: 	 86.00.41.00.06
Bus Type: 	 PCIe
DMA Size: 	 47 bits
DMA Mask: 	 0x7fffffffffff
Bus Location: 	 0000:04:00.0
Device Minor: 	 0
GPU Excluded:	 No

This would automatically cover TITAN Xp and TITAN V, which are currently not in the list but are used in RISE's and Woosuk's servers. Does that make sense?
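
A rough sketch of that name-agnostic detection via the driver's procfs entries (illustrative only; the field layout is taken from the output above):

```python
import os

def gpus_from_procfs(root: str = '/proc/driver/nvidia/gpus') -> list:
    """Return the GPU model of each device known to the NVIDIA driver."""
    models = []
    if not os.path.isdir(root):
        return models
    for bus_id in sorted(os.listdir(root)):
        info_path = os.path.join(root, bus_id, 'information')
        with open(info_path) as f:
            for line in f:
                # e.g. "Model:          Tesla P100-PCIE-16GB"
                if line.startswith('Model:'):
                    models.append(line.split(':', 1)[1].strip())
                    break
    return models  # e.g. ['Tesla P100-PCIE-16GB', ...]
```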

@michaelzhiluo (Collaborator, Author) replied:

It seems a bit different on Windows (in how Ray handles it), which is the OS used by one of my family members (along with nvidia-smi not being available despite the GPU being usable). I'll look deeper into it for a future PR.

Resolved review thread on sky/resources.py.
@infwinston (Member) left a comment:

Some follow-up suggestions. If this PR is too urgent, feel free to ignore them and file issues to fix them later.

Resolved (outdated) review thread on sky/backends/backend_utils.py.

@infwinston (Member) left a comment:

LGTM. Please consider filing an issue for supporting more GPU types :)

@michaelzhiluo michaelzhiluo merged commit fd6c335 into master Nov 4, 2022
@michaelzhiluo michaelzhiluo deleted the onprem-custom branch November 5, 2022 19:58
sumanthgenz pushed a commit to sumanthgenz/skypilot that referenced this pull request Jan 15, 2023
Michaelvll added a commit that referenced this pull request Jan 18, 2023