
ggml : remove ggml_task_type and GGML_PERF #8017

Merged: 6 commits into master from sl/remove-task-type on Jun 24, 2024

Conversation

slaren (Collaborator) commented Jun 19, 2024

Removes the phases defined by ggml_task_type in favor of a single compute phase that operations can split into any number of sub-phases by using barriers.

Since a barrier requires every thread to reach it, operations are always called with the maximum number of threads, and they are responsible for skipping the excess threads if the implementation cannot use them.
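
Sketched below, as a hedged illustration rather than the actual ggml internals, is the pattern this describes: every thread enters the op's single compute function, thread 0 performs the one-off setup, a barrier separates the sub-phases, and threads whose row range comes out empty simply skip the work. The names (compute_ctx, op_example, scratch) and the use of pthread_barrier_t are assumptions for the example.

```c
// Standalone sketch of the barrier-based phase split described above.
// compute_ctx, op_example and pthread_barrier_t are illustrative stand-ins,
// not the actual ggml internals.
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    pthread_barrier_t barrier; // initialized once for all nth threads
    float * scratch;           // shared scratch buffer written in sub-phase 1
    float * dst;
    size_t  n_rows;
    size_t  row_size;
} compute_ctx;

// called by every one of the nth threads, each with its own index ith
static void op_example(compute_ctx * ctx, int ith, int nth) {
    // sub-phase 1: single-threaded setup (what the INIT phase used to do)
    if (ith == 0) {
        memset(ctx->scratch, 0, ctx->n_rows * ctx->row_size * sizeof(float));
    }

    // every thread must reach the barrier, even if it did no setup work
    pthread_barrier_wait(&ctx->barrier);

    // sub-phase 2: each thread processes a disjoint range of rows;
    // excess threads (ith >= n_rows) get an empty range and skip the loop
    const size_t per_thread = (ctx->n_rows + nth - 1) / nth;
    const size_t ir0 = (size_t) ith * per_thread;
    const size_t ir1 = ir0 + per_thread < ctx->n_rows ? ir0 + per_thread : ctx->n_rows;

    for (size_t ir = ir0; ir < ir1; ++ir) {
        for (size_t i = 0; i < ctx->row_size; ++i) {
            ctx->dst[ir * ctx->row_size + i] = ctx->scratch[ir * ctx->row_size + i] + 1.0f;
        }
    }
}
```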

Additionally, parallelizes the conversion of src1 to vec_dot_type in mul_mat and mul_mat_id. Since the threads are available regardless, there is no reason not to use them.
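
A rough sketch of that idea, with assumed names (from_float, wdata, row_size_qt) rather than the real mul_mat code: each thread converts a strided subset of src1 rows into a shared buffer, and a barrier guarantees the whole buffer is written before any thread starts the dot products.

```c
// Rough sketch of the parallel src1 -> vec_dot_type conversion; the names
// are assumptions for illustration, not the actual mul_mat code.
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stddef.h>

typedef void (*from_float_t)(const float * x, void * y, size_t n);

static void convert_src1_parallel(pthread_barrier_t * barrier,
                                  const float * src1, void * wdata,
                                  size_t n_rows, size_t row_elems, size_t row_size_qt,
                                  from_float_t from_float,
                                  int ith, int nth) {
    // sub-phase 1: thread ith converts rows ith, ith + nth, ith + 2*nth, ...
    for (size_t ir = (size_t) ith; ir < n_rows; ir += nth) {
        from_float(src1 + ir * row_elems,
                   (char *) wdata + ir * row_size_qt,
                   row_elems);
    }

    // all of wdata must be visible before any thread starts the dot products
    pthread_barrier_wait(barrier);

    // sub-phase 2: the usual row-wise matrix multiplication follows here
}
```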

Removes the GGML_PERF option and its related fields in ggml_tensor and other structs.

github-actions bot added the ggml label (changes relating to the ggml tensor library for machine learning) on Jun 19, 2024
mofosyne added the Review Complexity : Medium label (generally require more time to grok but manageable by beginner to medium expertise level) on Jun 19, 2024
if (!inplace) {
    if (ith == 0) {
        // memcpy needs to be synchronized across threads to avoid race conditions.
        // => do it in INIT phase
Owner review comment (on the snippet above):

The comment about INIT phase is no longer relevant
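
For illustration, a hedged sketch of how that snippet can read once the INIT phase is gone; dup_sketch and its parameters are made up for the example, and pthread_barrier_t stands in for ggml's internal barrier. The copy is still done by a single thread, and an explicit barrier now provides the synchronization the INIT phase used to.

```c
// Illustrative only: the same synchronization without an INIT phase.
// dup_sketch and pthread_barrier_t are stand-ins, not the actual PR code.
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stddef.h>
#include <string.h>

static void dup_sketch(pthread_barrier_t * barrier,
                       void * dst, const void * src, size_t nbytes,
                       int ith, int inplace) {
    if (!inplace) {
        if (ith == 0) {
            // the copy is still performed by a single thread ...
            memcpy(dst, src, nbytes);
        }
        // ... and the barrier (rather than a separate INIT phase) makes it
        // visible to every thread before the per-thread compute work starts
        pthread_barrier_wait(barrier);
    }
    // per-thread compute work would follow here
}
```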

slaren (Collaborator, Author) commented Jun 20, 2024

The Vulkan build is failing because it has some old code that still references ggml_compute_params, but that code is not actually used for anything. I will fix it after #7961 is merged to avoid conflicts.

github-actions bot added the Vulkan label (issues specific to the Vulkan backend) on Jun 23, 2024
github-actions bot added the build label (compilation issues) on Jun 23, 2024
slaren merged commit 95f57bb into master on Jun 24, 2024 (66 checks passed).
slaren deleted the sl/remove-task-type branch on Jun 24, 2024 at 01:08.
Nexesenex added a commit to Nexesenex/croco.cpp that referenced this pull request Jun 26, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Jun 30, 2024
* ggml : remove ggml_task_type and GGML_PERF

* check abort_callback on main thread only

* vulkan : remove usage of ggml_compute_params

* remove LLAMA_PERF
MagnusS0 pushed a commit to MagnusS0/llama.cpp-normistral-tokenizer that referenced this pull request Jul 1, 2024
Labels
* build (Compilation issues)
* ggml (changes relating to the ggml tensor library for machine learning)
* Review Complexity : Medium (generally require more time to grok but manageable by beginner to medium expertise level)
* Vulkan (Issues specific to the Vulkan backend)
3 participants