Actions: pytorch/torchchat

Showing runs from all workflows
18,054 workflow runs


export.py: fix custom SDPA type conversion logic & re-enable for bfloat16
Run the README instructions - with stories - on MPS/MacOS #1412: Pull request #1171 synchronize by swolchok
September 24, 2024 15:45 12m 16s gh/swolchok/4/head
export.py: fix custom SDPA type conversion logic & re-enable for bfloat16
Run parallel prefill #1798: Pull request #1171 synchronize by swolchok
September 24, 2024 15:45 28m 45s gh/swolchok/4/head
export.py: fix custom SDPA type conversion logic & re-enable for bfloat16
pull #2834: Pull request #1171 synchronize by swolchok
September 24, 2024 15:45 39m 49s gh/swolchok/4/head
export.py: fix custom SDPA type conversion logic & re-enable for bfloat16
Run the README instructions - with stories #1523: Pull request #1171 synchronize by swolchok
September 24, 2024 15:45 20m 47s gh/swolchok/4/head
export.py: fix custom SDPA type conversion logic & re-enable for bfloat16
Run the README instructions - with stories - on MacOS #1523: Pull request #1171 synchronize by swolchok
September 24, 2024 15:45 12m 21s gh/swolchok/4/head
Use default max_seq_length of 128 when loading ExecuTorch models
Run the README instructions - with stories - on MPS/MacOS #1411: Pull request #1184 opened by swolchok
September 24, 2024 15:44 12m 56s gh/swolchok/6/head
Use default max_seq_length of 128 when loading ExecuTorch models
Run the README instructions - with stories - on MacOS #1522: Pull request #1184 opened by swolchok
September 24, 2024 15:44 12m 21s gh/swolchok/6/head
Use default max_seq_length of 128 when loading ExecuTorch models
Run parallel prefill #1797: Pull request #1184 opened by swolchok
September 24, 2024 15:44 28m 47s gh/swolchok/6/head
Use default max_seq_length of 128 when loading ExecuTorch models
pull #2833: Pull request #1184 opened by swolchok
September 24, 2024 15:44 39m 26s gh/swolchok/6/head
Use default max_seq_length of 128 when loading ExecuTorch models
Run the README instructions - with stories #1522: Pull request #1184 opened by swolchok
September 24, 2024 15:44 20m 57s gh/swolchok/6/head
Use default max_seq_length of 128 when loading ExecuTorch models
Run the aoti runner with CUDA using stories #1907: Pull request #1184 opened by swolchok
September 24, 2024 15:44 9m 33s gh/swolchok/6/head
Android artifact update (#1138)
Run the aoti runner with CUDA using stories #1906: Commit 04ea309 pushed by kirklandsign
September 24, 2024 07:18 9m 46s main
Android artifact update (#1138)
Run parallel prefill #1796: Commit 04ea309 pushed by kirklandsign
September 24, 2024 07:18 28m 54s main
Android artifact update (#1138)
Run the README instructions - with stories - on MPS/MacOS #1410: Commit 04ea309 pushed by kirklandsign
September 24, 2024 07:18 10m 39s main
Android artifact update (#1138)
pull #2832: Commit 04ea309 pushed by kirklandsign
September 24, 2024 07:18 38m 44s main
Android artifact update (#1138)
Run the README instructions - with stories #1521: Commit 04ea309 pushed by kirklandsign
September 24, 2024 07:18 21m 9s main
Android artifact update (#1138)
Run the README instructions - with stories - on MacOS #1521: Commit 04ea309 pushed by kirklandsign
September 24, 2024 07:18 11m 24s main
enable llava on torchchat
pull #2831: Pull request #1183 synchronize by Gasoonjia
September 24, 2024 03:50 39m 28s llava-e2e
enable llava on torchchat
Run parallel prefill #1795: Pull request #1183 synchronize by Gasoonjia
September 24, 2024 03:50 28m 32s llava-e2e
enable llava on torchchat
Run the README instructions - with stories - on MPS/MacOS #1409: Pull request #1183 synchronize by Gasoonjia
September 24, 2024 03:50 10m 18s llava-e2e
enable llava on torchchat
Run the README instructions - with stories #1520: Pull request #1183 synchronize by Gasoonjia
September 24, 2024 03:50 20m 46s llava-e2e
enable llava on torchchat
Run the aoti runner with CUDA using stories #1905: Pull request #1183 synchronize by Gasoonjia
September 24, 2024 03:50 9m 24s llava-e2e
enable llava on torchchat
Run the README instructions - with stories - on MacOS #1520: Pull request #1183 synchronize by Gasoonjia
September 24, 2024 03:50 11m 15s llava-e2e
enable llava on torchchat
Run the README instructions - with stories #1519: Pull request #1183 synchronize by Gasoonjia
September 24, 2024 03:37 20m 26s llava-e2e
enable llava on torchchat
pull #2830: Pull request #1183 synchronize by Gasoonjia
September 24, 2024 03:37 38m 35s llava-e2e