FP16 types are reported as unsupported for CUDA BE on compile time after cf6cc662 #1799
@vladimirlaz, could you please provide a link to the CUDA spec where it says that FP16 is supported?
I'm asking because the nvptx target in clang doesn't declare support for the float16 (FP16/half) type, whereas, for example, the spir target does: llvm/clang/lib/Basic/Targets/SPIR.h, line 68 in f9226d2.
The target supports FP16 since sm_53, I believe. This is a clash with Nvidia's declared OpenCL capabilities. I'm aware of this; I have a patch to enable it locally, but it is entangled with builtin fixes. This reminds me: any particular reason you are using
BTW: by default we are using sm_30. I tried to raise it up to sm_75 (including sm_53; I also found this mentioned in a post, but I did not find it in the spec), but FP16 is still reported as unsupported. Here is the link: https://docs.nvidia.com/cuda/cufft/index.html#half-precision-transforms
@Naghasan, @Fznamznon I am going to proceed with the pulldown and need a workaround for the problem. @Naghasan, do you have an ETA for the fix?
I would apply option 2 if there are no objections and the fix will be ready reasonably fast (to avoid problems with ongoing LLVM pulldowns).
@vladimirlaz Enable it if the triple has
I have prepared a workaround (skip the check for the SYCL CUDA BE target triple): 816febf
After applying the review comment, a4f4fa9 was submitted to the sycl branch.
@Fznamznon @AlexeySachkov, @erichkeane can you comment on __fp16 vs _Float16 please? |
The target extension type for SPIR-V is essentially target("spirv.TypeName", <image type>, <int params>). Most of the work to support translation of these types has already happened beforehand, so the primary step here is to enable the translation work in SPIRVWriter and to make the SPIRVBuiltinHelpers work with target types as well. Constructing LLVM IR from SPIR-V using these types is not yet supported, mainly out of uncertainty about the proper interface for letting the resultant consumers indicate that they wish to support these types. Original commit: KhronosGroup/SPIRV-LLVM-Translator@951a6ad
The expected representation is: target("spirv.JointMatrixINTEL", %element_type%, %rows%, %cols%, %scope%, %use%, (optional) %element_type_interpretation%). TODO: figure out how to deal with the switch from the old API (Matrix has Layout) to the new API (Layout was removed). Depends on: intel#1799, intel#8343. Original commit: KhronosGroup/SPIRV-LLVM-Translator@ee03f5f
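Concretely, such a value would appear in textual LLVM IR as a target extension type. The element type and the integer parameters below are illustrative placeholders, not mandated encodings:

```llvm
; Hypothetical instance: i32 elements, 8 rows, 16 cols; the 3 and 0 are
; placeholder encodings for the %scope% and %use% operands named above.
declare void @consume(target("spirv.JointMatrixINTEL", i32, 8, 16, 3, 0))
```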
The problem was detected during LLVM pulldown testing: http://ci.llvm.intel.com:8010/#/builders/37/builds/1152
It looks like a bug in the diagnostics (AFAIK CUDA supports FP16) introduced by cf6cc66; the tests passed before that patch.