[Matrix][SYCL] Add support for bf16's wi_element #5397

Merged
merged 3 commits into intel:sycl on Jan 29, 2022

Conversation

yubingex007-a11y
Contributor

Previously, we did not take bf16 operations into consideration. This PR adds support for bf16's wi_element: when operations are performed on bf16 values, we extend them to fp32, perform the operations in fp32, and convert the result back.
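As a rough illustration of that widen-to-fp32 scheme (a standalone sketch with hypothetical helper names, not the PR's actual wi_element code; bf16 is stored as the upper 16 bits of an IEEE-754 fp32):

```cpp
#include <cstdint>
#include <cstring>

// bf16 occupies the high 16 bits of an fp32 bit pattern.
static float bf16_to_fp32(uint16_t x) {
  uint32_t bits = static_cast<uint32_t>(x) << 16;
  float f;
  std::memcpy(&f, &bits, sizeof(f)); // bit-exact reinterpretation
  return f;
}

static uint16_t fp32_to_bf16(float x) {
  uint32_t bits;
  std::memcpy(&bits, &x, sizeof(bits));
  return static_cast<uint16_t>(bits >> 16); // drop the low mantissa bits
}

// A bf16 operation: extend both operands to fp32, operate, convert back.
static uint16_t bf16_add(uint16_t a, uint16_t b) {
  return fp32_to_bf16(bf16_to_fp32(a) + bf16_to_fp32(b));
}
```

For example, 0x3F80 encodes 1.0 in this representation, and bf16_add(0x3F80, 0x3F80) produces 0x4000, the bf16 encoding of 2.0.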

@yubingex007-a11y yubingex007-a11y requested a review from a team as a code owner January 26, 2022 09:06
@bader bader changed the title [Matrix][SYCL] add support for bf16's wi_element [Matrix][SYCL] Add support for bf16's wi_element Jan 26, 2022
@alexbatashev
Contributor

@AlexeySotkin @dkhaldi I suggest creating a team on GitHub and adding it as a code owner for this code in https://github.com/intel/llvm/blob/sycl/.github/CODEOWNERS, much like the ESIMD extension is added.

@yubingex007-a11y I'll approve this PR if Alexey or Dounia approve
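An entry of the shape Alexey suggests would look like the following (the path and team name here are hypothetical, for illustration only):

```
# CODEOWNERS: route reviews of the matrix extension to a dedicated team
/sycl/include/sycl/ext/oneapi/matrix/  @intel/matrix-reviewers
```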

@@ -451,6 +451,276 @@ class wi_element {
}
};

template <size_t NumRows, size_t NumCols, matrix_layout Layout, typename Group>
class wi_element<uint16_t, NumRows, NumCols, Layout, Group> {
Contributor

I think we should specialize this class on the bfloat16 type.

Contributor

Alexey, we are adding support for the bfloat16 type as the next step, in a separate PR.
This PR extends support for the matrix bf16 type represented by unsigned short, so we are not changing the way we treat bf16 right now.
Instead, we plan to keep both: unsigned short as bf16, and the SYCL bfloat16 type.
We will keep both until SYCL bfloat16 makes it into the specification (it is experimental right now).

Contributor Author

@AlexeySotkin @dkhaldi For now we still treat uint16_t as bf16 in matrix; in the future we will support SYCL's bfloat16.

Comment on lines +506 to +517
static float make_fp32(uint16_t x) {
unsigned int y = x;
y = y << 16;
float *res = reinterpret_cast<float *>(&y);
return *res;
}

static uint16_t make_bf16(float x) {
int *res = reinterpret_cast<int *>(&x);
*res = *res >> 16;
return (uint16_t)*res;
}
Contributor

Why can't we use __spirv_ConvertBF16ToFINTEL and __spirv_ConvertFToBF16INTEL here?

Contributor Author

We use this workaround until we support __spirv_ConvertBF16ToFINTEL and __spirv_ConvertFToBF16INTEL on the CPU side.

Contributor

@Nuullll Nuullll Mar 8, 2022

FYI. I'm working on supporting SYCL_INTEL_bf16_conversion capability on CPU.
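An aside on the conversion snippet discussed in this thread: the reinterpret_cast type punning in make_fp32/make_bf16 formally violates C++ strict-aliasing rules. A behavior-equivalent, aliasing-safe sketch (hypothetical names, illustrative only) uses std::memcpy instead:

```cpp
#include <cstdint>
#include <cstring>

// Aliasing-safe variants of the PR's make_fp32/make_bf16.
static float make_fp32_safe(uint16_t x) {
  uint32_t y = static_cast<uint32_t>(x) << 16; // bf16 occupies the high bits
  float res;
  std::memcpy(&res, &y, sizeof(res)); // safe bit copy instead of pointer cast
  return res;
}

static uint16_t make_bf16_safe(float x) {
  uint32_t bits;
  std::memcpy(&bits, &x, sizeof(bits));
  // Truncates the low 16 mantissa bits (no round-to-nearest-even),
  // matching the shift-based original.
  return static_cast<uint16_t>(bits >> 16);
}
```

Note that the bf16 => fp32 => bf16 round trip is exact, since the widened value's low 16 bits are zero; only fp32 => bf16 loses precision.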

@@ -451,6 +451,276 @@ class wi_element {
}
};

template <size_t NumRows, size_t NumCols, matrix_layout Layout, typename Group>
Contributor

Add a comment that uint16_t here represents the bf16 type, as we have been doing for the previous matrix functions.
Since the AMX and DPAS implementations don't support uint16_t, this should raise no problem.
Also add a comment that the plan is to move towards SYCL bfloat16 as it makes it into the specification.

@@ -451,6 +451,284 @@ class wi_element {
}
};

// uint16_t here represents bf16 type as we have been doing for the
Contributor

Some rewording:
// Note that similarly to the other matrix functions, uint16_t is used here to represent bf16 type. Since the AMX and DPAS implementations don't support uint16_t, this interpretation is possible. This design choice was made before the introduction of SYCL experimental bfloat16 type. Our plan is to move towards using the SYCL bfloat16. But since it is still experimental, we will probably keep both uint16 interpretation and SYCL bfloat16.

}

// For now we use the following functions for conversion (bf16=>fp32,
// fp32=>bf16) as a workaround. In the future we will use
Contributor

Some rewording:
// We use here the following functions for conversion (bf16=>fp32 and fp32=>bf16). This is a workaround until we are able to use __spirv_ConvertFToBF16INTEL and __spirv_ConvertBF16ToFINTEL once these are supported in the CPU backend

Contributor

@dkhaldi dkhaldi left a comment

LGTM

@yubingex007-a11y
Contributor Author

ping? @intel/llvm-reviewers-runtime

@yubingex007-a11y
Contributor Author

@alexbatashev Hi, could you approve and merge this?

Contributor

@alexbatashev alexbatashev left a comment

@bader or @vladimirlaz or @dm-vodopyanov can you please merge?

@vladimirlaz vladimirlaz merged commit 9f2b7bd into intel:sycl Jan 29, 2022
6 participants