Linspace should not use double type unconditionally #878

Merged

Conversation

oleksandr-pavlyk
Contributor

The kernel implementing dpctl.tensor.linspace uses a double-precision temporary variable, which causes failures when running on hardware without double-precision (fp64) support.

This PR changes the kernel to use the single-precision float type on such hardware.
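The dtype selection described above can be sketched as follows. This is a minimal illustration, not the PR's actual kernel code; `device_has_fp64` is a hypothetical stand-in for a device capability query such as dpctl's `SyclDevice.has_aspect_fp64`:

```python
import numpy as np

def linspace_compute_dtype(device_has_fp64):
    """Pick the dtype for linspace's intermediate computation.

    `device_has_fp64` stands in for a device capability query
    (e.g. dpctl's SyclDevice.has_aspect_fp64); this is a sketch,
    not the kernel code itself.
    """
    return np.float64 if device_has_fp64 else np.float32

# Without fp64 support, the computation falls back to single precision
assert linspace_compute_dtype(False) == np.float32
assert linspace_compute_dtype(True) == np.float64
```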

It also changes how a double-precision NumPy array is copied to a device without fp64 hardware support: the array is cast to single precision on the host before the kernel is launched.
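A host-side sketch of that cast, assuming a boolean capability flag for the target device (the helper name and flag are hypothetical, for illustration only):

```python
import numpy as np

def cast_for_device(host_array, device_has_fp64):
    # Downcast double-precision host data before the device copy
    # when the target device lacks fp64 support.
    if host_array.dtype == np.float64 and not device_has_fp64:
        return host_array.astype(np.float32)
    return host_array

x = np.linspace(0.0, 1.0, 5)              # float64 on the host
y = cast_for_device(x, device_has_fp64=False)
assert y.dtype == np.float32
```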

Tests are modified so that the test suite now passes on an Iris Xe integrated card.

  • Have you provided a meaningful PR description?
  • Have you added a test, reproducer or referred to an issue with a reproducer?
  • Have you tested your changes locally for CPU and GPU devices?
  • Have you made sure that new changes do not introduce compiler warnings?
  • If this PR is a work in progress, are you filing the PR as a draft?

@github-actions

github-actions bot commented Aug 8, 2022

@coveralls
Collaborator

Coverage Status

Coverage decreased (-0.08%) to 81.777% when pulling e363969 on linspace-should-not-use-double-type-unconditionally into 09de29b on master.

@oleksandr-pavlyk oleksandr-pavlyk merged commit b68b1e4 into master Aug 8, 2022
@oleksandr-pavlyk oleksandr-pavlyk deleted the linspace-should-not-use-double-type-unconditionally branch August 8, 2022 20:12
@github-actions

github-actions bot commented Aug 8, 2022

Deleted rendered PR docs from intelpython.github.com/dpctl, latest should be updated shortly. 🤞
