Remove deprecated XLA GPU flags.
justinjfu committed Dec 12, 2024
1 parent 99d675a commit 1021603
Showing 1 changed file with 0 additions and 6 deletions.
6 changes: 0 additions & 6 deletions docs/gpu_performance_tips.md
````diff
@@ -44,11 +44,8 @@ example, we can add this to the top of a Python file:
 ```python
 import os
 os.environ['XLA_FLAGS'] = (
-    '--xla_gpu_enable_triton_softmax_fusion=true '
     '--xla_gpu_triton_gemm_any=True '
-    '--xla_gpu_enable_async_collectives=true '
     '--xla_gpu_enable_latency_hiding_scheduler=true '
-    '--xla_gpu_enable_highest_priority_async_stream=true '
 )
 ```
@@ -58,9 +55,6 @@ training on Nvidia GPUs](https://github.com/NVIDIA/JAX-Toolbox/blob/main/rosetta
 
 ### Code generation flags
 
-* **--xla_gpu_enable_triton_softmax_fusion** This flag enables an automatic
-  softmax fusion, based on pattern-matching backed by Triton code generation.
-  The default value is False.
 * **--xla_gpu_triton_gemm_any** Use the Triton-based GEMM (matmul) emitter for
   any GEMM that it supports. The default value is False.
````
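For readers updating their own setup, the documented snippet as it stands after this commit can be sketched as follows — a minimal example keeping only the two flags the diff retains (the three deprecated flags are dropped):

```python
import os

# Set XLA_FLAGS before JAX/XLA initializes so the flags take effect.
# These are the two flags the updated docs snippet keeps; the deprecated
# flags removed in this commit are intentionally absent.
os.environ['XLA_FLAGS'] = (
    '--xla_gpu_triton_gemm_any=True '
    '--xla_gpu_enable_latency_hiding_scheduler=true '
)

print(os.environ['XLA_FLAGS'])
```

As in the original snippet, this must run at the top of the file, before the first `import jax`, since XLA reads the environment variable when it initializes.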
