Conversation

@GermanAizek (Contributor)

Wouldn't it be correct to cast this to the type of sections, which is int, just as the casts to float above do?

@pwilkin pwilkin added the vibe-coded Created with heavy use of LLM assistants, requires human verification label Dec 1, 2025
@github-actions github-actions bot added the ggml changes relating to the ggml tensor library for machine learning label Dec 1, 2025
@@ -6542,7 +6382,7 @@ static void ggml_compute_backward(
                 memcpy(&attn_factor, (const float *) tensor->op_params + 8, sizeof(float));
                 memcpy(&beta_fast, (const float *) tensor->op_params + 9, sizeof(float));
                 memcpy(&beta_slow, (const float *) tensor->op_params + 10, sizeof(float));
-                memcpy(&sections, tensor->op_params + 11, sizeof(sections));
+                memcpy(&sections, (const int *) tensor->op_params + 11, sizeof(sections));
@danbev (Member), Dec 1, 2025

I don't think this is strictly required, and I believe the existing code is intentional, since the type of op_params is:

        int32_t op_params[GGML_MAX_OP_PARAMS / sizeof(int32_t)];
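
For illustration, here is a minimal standalone sketch (hypothetical, not taken from the ggml sources; it assumes int is 32 bits, and the GGML_MAX_OP_PARAMS value is only illustrative) showing why the added cast does not change the address the bytes are copied from:

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    #define GGML_MAX_OP_PARAMS 64   /* illustrative value, not necessarily ggml's */

    int main(void) {
        /* Same shape as the op_params field: an array of int32_t. */
        int32_t op_params[GGML_MAX_OP_PARAMS / sizeof(int32_t)] = {0};
        op_params[11] = 42;          /* pretend slot 11 holds the sections data */

        int a, b;

        /* Existing code: pointer arithmetic directly on the int32_t array,
         * stepping in sizeof(int32_t) = 4-byte units. */
        memcpy(&a, op_params + 11, sizeof(a));

        /* Proposed cast: steps in sizeof(int) units, which is also 4 bytes
         * wherever int is 32-bit, so it yields the same address. */
        memcpy(&b, (const int *) op_params + 11, sizeof(b));

        assert(a == b);              /* the cast is cosmetic, not functional */
        return 0;
    }

By the same reasoning, the (const float *) casts in the surrounding lines are address-neutral wherever sizeof(float) == sizeof(int32_t); they mainly document the type being read out of op_params.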
