
[SYCL] Enable discard_events mode for the Level Zero #6533


Merged: 7 commits merged into intel:sycl on Aug 22, 2022

Conversation

againull (Contributor) commented Aug 5, 2022:

  • Teach the Level Zero plugin piEnqueue* functions to accept nullptr instead of a pointer to the output event. In this case the event is created internally and is not visible externally (see the toy sketch after this list).
  • Introduce RefCountExternal for pi_event, which tracks external references to an event. Knowing that an event is not externally visible enables optimizations that will be implemented in subsequent PRs.
  • Don't create a proxy event for a batch if there are no externally visible events in the batch.
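
As a rough illustration of the first bullet, here is a toy sketch (not the plugin's actual code; the ToyEvent type and toyEnqueue helper are made up for illustration) of how an enqueue path can treat a nullptr output-event pointer:

// Toy model only: when the caller passes nullptr for the output event, the
// plugin still creates an event for its own bookkeeping, but it never becomes
// visible to the application.
struct ToyEvent {
  unsigned RefCountExternal = 0; // 0 means not visible outside the plugin
};

// Hypothetical enqueue helper; name and signature are illustrative.
int toyEnqueue(ToyEvent **OutEvent) {
  ToyEvent *Ev = new ToyEvent();
  if (OutEvent != nullptr) {
    Ev->RefCountExternal = 1; // the caller now holds an external reference
    *OutEvent = Ev;           // hand the event back to the caller
  }
  // ... append Ev to the command list; an event whose RefCountExternal stays 0
  // is released by the plugin once it is no longer needed internally ...
  return 0;
}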

againull requested review from a team as code owners on August 5, 2022 16:50
againull requested a review from sergey-semenov on August 5, 2022 16:50
againull (Contributor, Author) commented Aug 5, 2022:

/verify with intel/llvm-test-suite#1132

againull requested a review from smaslov-intel on August 5, 2022 16:59
againull (Contributor, Author) commented Aug 8, 2022:

The SYCL/Plugin/level_zero_events_caching.cpp failure is unrelated to this PR; a fix is up for review here: intel/llvm-test-suite#1138

HIP backend failures are unrelated because the changes are isolated to the L0 plugin.

againull requested a review from smaslov-intel on August 10, 2022 00:02
againull (Contributor, Author) commented:

@smaslov-intel Could you please review this PR?

@@ -1374,6 +1377,10 @@ struct _pi_event : _pi_object {
  // being visible to the host at all.
  bool Completed = {false};

  // Indicates that event is internal, i.e. it is visible inside the L0 plugin
  // only.
  bool Internal = {false};
Reviewer comment (Contributor):

Should we instead do something similar to what we do for the queue?

pi_uint32 RefCountExternal{1};

againull (Contributor, Author) replied:

If we do the optimization (2-event model) only for discarded events in an in-order queue, then a bool variable seems enough. But if we want to extend the optimization to events which are not discarded, then we need a count of external references to know when an event is no longer externally visible.
So I used the external ref count approach as suggested.
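
To make the distinction concrete, here is a toy sketch of the two-counter idea (assumed semantics, not the plugin's actual types or release logic):

#include <atomic>

// RefCount counts all references (internal + external); RefCountExternal
// counts only the references handed out to the application via
// piEventRetain/piEventRelease. Once the external count drops to zero, the
// event can never be queried or waited on by the user again, so the plugin is
// free to apply internal optimizations to it.
struct ToyPiEvent {
  std::atomic<unsigned> RefCount{1};
  std::atomic<unsigned> RefCountExternal{1};

  bool isExternallyVisible() const { return RefCountExternal.load() != 0; }
};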

* Use RefCountExternal to keep track of external references
* Rename PI_EXT_ONEAPI_QUEUE_DISCARD_EVENTS_MODE_ENABLE -> PI_EXT_ONEAPI_QUEUE_DISCARD_EVENTS
* Pass property to all backends
againull requested a review from smaslov-intel on August 19, 2022 22:38
@@ -686,7 +690,7 @@ inline static pi_result createEventAndAssociateQueue(
   // Append this Event to the CommandList, if any
   if (CommandList != Queue->CommandListMap.end()) {
     CommandList->second.append(*Event);
-    PI_CALL(piEventRetain(*Event));
+    (*Event)->RefCount.increment();
Reviewer comment (Contributor):

Just curious: why is this change needed?

againull (Contributor, Author) replied:

piEventRetain increments the external ref count, but here we need to increment only the internal ref count. Same logic as for pi_queue.
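
A toy sketch of the difference between the two retain paths (assumed semantics, not the actual piEventRetain implementation; the type and function names here are illustrative):

#include <atomic>

struct ToyCounts {
  std::atomic<unsigned> RefCount{1};         // all references
  std::atomic<unsigned> RefCountExternal{1}; // application-held references
};

// User-facing retain: the application now holds one more reference, so both
// counters go up.
void externalRetain(ToyCounts &E) {
  E.RefCountExternal.fetch_add(1);
  E.RefCount.fetch_add(1);
}

// Plugin-internal retain: only the total count goes up, so an event the
// application never saw still reads as externally invisible. This is the role
// played by RefCount.increment() in the diff above.
void internalRetain(ToyCounts &E) {
  E.RefCount.fetch_add(1);
}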

smaslov-intel (Contributor) left a review comment:

LGTM

againull merged commit 1372120 into intel:sycl on Aug 22, 2022
againull deleted the discard branch on December 3, 2022 00:11