
[GENERAL] Minor improvements #110

Merged
merged 1 commit into master from minor-fixup on May 17, 2021

Conversation

@ptillet ptillet (Collaborator) commented May 17, 2021

  • Load libcuda.so.1 if libcuda.so is not there; error out if neither is
    present (a loading sketch follows this list).
  • Support for multiple grad_to_none tensors in triton.testing.do_bench
    (a usage sketch follows this list).
  • Benchmark dataframe printed along with its name.
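The libcuda fallback can be pictured with a small ctypes sketch. Triton itself does this in its driver/runtime code; `load_libcuda` and the error message below are illustrative only, not Triton's API:

```python
import ctypes

def load_libcuda():
    """Try libcuda.so first, then libcuda.so.1; raise if neither loads."""
    for name in ("libcuda.so", "libcuda.so.1"):
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue
    # Neither library could be loaded: surface a clear error instead of
    # failing later with an obscure symbol lookup message.
    raise RuntimeError(
        "Could not load libcuda.so or libcuda.so.1; "
        "is the NVIDIA driver installed?"
    )
```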

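A hedged usage sketch of the grad_to_none change, assuming a CUDA device and a working torch/triton install; the shapes and the matmul are made up, and do_bench's other arguments and return value have varied across Triton versions:

```python
import torch
import triton

# Made-up tensors and op, purely for illustration.
x = torch.randn(2048, 2048, device="cuda", requires_grad=True)
w = torch.randn(2048, 2048, device="cuda", requires_grad=True)
y = torch.matmul(x, w)
dy = torch.randn_like(y)

# grad_to_none=[x, w]: both gradients are reset between timed runs, so
# repeated backward() calls don't accumulate into stale .grad buffers
# and distort the measurement.
ms = triton.testing.do_bench(
    lambda: y.backward(dy, retain_graph=True),
    grad_to_none=[x, w],
)
print(ms)
```

Before this change, only a single tensor could be passed; benchmarking a backward pass that produces gradients for several inputs is the case the list form addresses.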
@ptillet ptillet merged commit 6342d8f into master May 17, 2021
@ptillet ptillet deleted the minor-fixup branch June 14, 2021 17:13
ptillet added a commit that referenced this pull request Jul 27, 2021
* Load libcuda.so.1 if libcuda.so is not there. Error if both aren't there.
* Support for multiple grad_to_none in triton.testing.do_bench
* Benchmark dataframe printed along with name
ptillet added a commit that referenced this pull request Sep 12, 2022
dfukalov pushed a commit to dfukalov/triton that referenced this pull request Feb 19, 2023
…ayernorm_tutorial_to_fwd_pass

Change layernorm tutorial unit test and benchmark to run forward pass.
ptillet added a commit that referenced this pull request Apr 1, 2024
pingzhuu pushed a commit to siliconflow/triton that referenced this pull request Apr 2, 2024
oraluben pushed a commit to oraluben/triton that referenced this pull request Sep 11, 2024
Signed-off-by: Ilya Enkovich <ilya.enkovich@intel.com>
pawelszczerbuk pushed a commit that referenced this pull request Nov 7, 2024
Support packing multiple allocations along rows.
This changes from interval tracking to bitmap tracking of the memory
to allow handling allocations along two dimensions (see the sketch below).
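A minimal sketch of the interval-to-bitmap idea; the commit's actual implementation lives elsewhere in the codebase, and `BitmapAllocator` and its methods are illustrative only:

```python
class BitmapAllocator:
    """Track memory as a 2-D bitmap of (row, column) cells instead of
    1-D intervals, so several allocations can be packed side by side
    within the same rows."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.used = [[False] * cols for _ in range(rows)]

    def _fits(self, r0, c0, h, w):
        # A candidate placement fits if every cell of the h x w rectangle is free.
        return all(not self.used[r][c]
                   for r in range(r0, r0 + h)
                   for c in range(c0, c0 + w))

    def allocate(self, h, w):
        """Find the first free h x w rectangle; mark it used and return
        its (row, col) origin, or None if nothing fits."""
        for r in range(self.rows - h + 1):
            for c in range(self.cols - w + 1):
                if self._fits(r, c, h, w):
                    for rr in range(r, r + h):
                        for cc in range(c, c + w):
                            self.used[rr][cc] = True
                    return (r, c)
        return None  # no space: caller must spill or grow the buffer
```

With a bitmap, two allocations of width `cols // 2` can share the same rows, which a purely one-dimensional interval tracker cannot express.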
gglin001 pushed a commit to gglin001/triton that referenced this pull request Nov 13, 2024
Signed-off-by: Ilya Enkovich <ilya.enkovich@intel.com>
stephen-huan pushed a commit to stephen-huan/triton that referenced this pull request Dec 24, 2024
Signed-off-by: Ilya Enkovich <ilya.enkovich@intel.com>