
chore: Reserve memory for native shuffle writer per partition #988

Merged 4 commits into apache:main on Oct 14, 2024

Conversation

viirya
Member

@viirya viirya commented Sep 30, 2024

Which issue does this PR close?

Closes #887.

Rationale for this change

What changes are included in this PR?

How are these changes tested?

@Kontinuation
Member

I've copied the tests from my branch into this PR, and they hang:

running 6 tests
test execution::datafusion::shuffle_writer::test::test_slot_size ... ok
test execution::datafusion::shuffle_writer::test::test_pmod ... ok
test execution::datafusion::shuffle_writer::test::test_insert_larger_batch ... ok
test execution::datafusion::shuffle_writer::test::test_insert_smaller_batch ... ok
test execution::datafusion::shuffle_writer::test::test_large_number_of_partitions has been running for over 60 seconds
test execution::datafusion::shuffle_writer::test::test_large_number_of_partitions_spilling has been running for over 60 seconds
^C

This is possibly caused by a deadlock on buffered_partitions.lock() when spilling is triggered.
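The hazard can be sketched with a minimal example (hypothetical names, not the actual Comet code): the insert path holds the partition buffer's mutex while a spill is triggered, and the spill path tries to acquire the same non-reentrant lock. The sketch uses try_lock so it returns an error instead of hanging the way a second lock() on the same thread would:

```rust
use std::sync::Mutex;

// Hypothetical stand-in for the shuffle writer's shared state.
struct ShuffleState {
    buffered_partitions: Mutex<Vec<Vec<u8>>>,
}

impl ShuffleState {
    // Inserting a batch holds the lock while it may trigger a spill...
    fn insert_batch(&self, batch: Vec<u8>) -> Result<(), String> {
        let mut parts = self.buffered_partitions.lock().unwrap();
        parts.push(batch);
        // ...and the spill path tries to take the same lock again
        // while the guard `parts` is still alive.
        self.spill()
    }

    fn spill(&self) -> Result<(), String> {
        // try_lock is used to demonstrate the problem safely: with
        // std::sync::Mutex, a second lock() here would simply deadlock.
        match self.buffered_partitions.try_lock() {
            Ok(mut parts) => {
                parts.clear();
                Ok(())
            }
            Err(_) => Err("would deadlock: lock already held by this thread".into()),
        }
    }
}

fn main() {
    let state = ShuffleState {
        buffered_partitions: Mutex::new(Vec::new()),
    };
    // The nested acquisition is detected instead of hanging.
    println!("{:?}", state.insert_batch(vec![1, 2, 3]));
}
```

The usual fixes are to release the guard before spilling, or to restructure so the spill path receives the already-locked buffers instead of re-locking.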

@viirya
Member Author

viirya commented Oct 1, 2024

Thanks. I know the cause of the deadlocks. I'm going to revamp some of the code.

@viirya viirya force-pushed the revise_shuffle_memory branch 2 times, most recently from 64c7c0d to d25837a Compare October 9, 2024 15:47
@codecov-commenter

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 33.97%. Comparing base (c3023c5) to head (e678cb0).
Report is 25 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff              @@
##               main     #988      +/-   ##
============================================
- Coverage     34.03%   33.97%   -0.07%     
+ Complexity      875      857      -18     
============================================
  Files           112      112              
  Lines         43289    43426     +137     
  Branches       9572     9622      +50     
============================================
+ Hits          14734    14752      +18     
- Misses        25521    25630     +109     
- Partials       3034     3044      +10     


@viirya
Member Author

viirya commented Oct 9, 2024

Hmm, these large-partition-count shuffle tests fail only on the macOS runners, with no stack trace... and I cannot reproduce the failure locally.

@viirya
Member Author

viirya commented Oct 9, 2024

Okay, it is the error I expected before:

ret: Err(ArrowError(ExternalError(IoError(Custom { kind: Uncategorized, error: PathError { path: "/var/folders/t_/mmhnh941511_hp2lwh383bp00000gn/T/.tmpQv8o2b/.tmpioYozN", err: Os { code: 24, kind: Uncategorized, message: "Too many open files" } } })), None))

But I already increased the limit with ulimit; it doesn't help.


#[test]
#[cfg_attr(miri, ignore)] // miri can't call foreign function `ZSTD_createCCtx`
#[cfg(not(target_os = "macos"))] // GitHub macOS runners fail with "Too many open files".
viirya (Member Author)

These tests fail on macOS runners with a "Too many open files" error. Increasing the limit with ulimit does not help either.

I skip them on macOS runners; the Ubuntu runners still cover them.

Member

The test shuffle_write_test(10000, 10, 200, Some(10 * 1024 * 1024)) spilled 1700 times; that is far too frequent for data of this size. It seems the excessive spilling is inevitable if we reserve the full batch capacity for the Arrow builder.
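A rough back-of-the-envelope sketch (a toy pool with hypothetical sizes, not the DataFusion memory pool API) shows why full-capacity reservations spill so often: if each of 200 partitions reserves a full builder's capacity up front from a 10 MiB pool, only about 51 reservations of 200 KiB fit at once, so a single pass over the partitions hits the limit and spills repeatedly:

```rust
// Toy memory pool: tracks bytes reserved against a fixed limit.
struct MemoryPool {
    limit: usize,
    used: usize,
}

impl MemoryPool {
    fn try_grow(&mut self, bytes: usize) -> bool {
        if self.used + bytes <= self.limit {
            self.used += bytes;
            true
        } else {
            false
        }
    }
    fn shrink(&mut self, bytes: usize) {
        self.used -= bytes;
    }
}

fn main() {
    let mut pool = MemoryPool { limit: 10 * 1024 * 1024, used: 0 };
    let full_batch_capacity = 200 * 1024; // assumed full builder capacity per partition
    let partitions = 200;

    let mut spills = 0;
    for _ in 0..partitions {
        if !pool.try_grow(full_batch_capacity) {
            // Pool exhausted: spill everything buffered so far, then retry.
            let used = pool.used;
            pool.shrink(used);
            spills += 1;
            pool.try_grow(full_batch_capacity);
        }
    }
    println!("spills in one pass over {} partitions: {}", partitions, spills);
}
```

Reserving only the bytes actually buffered (growing the reservation incrementally) would let far more partitions coexist in the same pool before any spill is needed.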

Member

This PR seems like an important improvement because it now uses the memory pool features. Perhaps we can follow up with optimizations to reduce spilling. wdyt @Kontinuation?

Member

Sure. Let's merge this.

I'm also considering adding a native sort-based shuffle writer that works better under constrained resources.

viirya (Member Author) commented Oct 12, 2024

We discussed supporting sort-based shuffle in the native shuffle writer, similar to Spark's shuffle, early in development. So I think it is on our roadmap, though it was not urgent at the time.
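The sort-based idea can be sketched as follows (illustrative only, not Comet's design): tag each row with its partition id, sort the single buffer by partition id, and emit contiguous runs, so only one output stream needs to be open at a time and memory is one contiguous buffer rather than one builder per partition:

```rust
// Group rows into per-partition runs by sorting on partition id.
// `String` stands in for a real row payload.
fn sort_based_shuffle(rows: Vec<(u32, String)>) -> Vec<(u32, Vec<String>)> {
    let mut tagged = rows;
    // Stable sort preserves the original row order within each partition.
    tagged.sort_by_key(|(pid, _)| *pid);

    // Walk the sorted buffer and cut it into contiguous partition runs;
    // a real writer would stream each run to its output file here.
    let mut runs: Vec<(u32, Vec<String>)> = Vec::new();
    for (pid, row) in tagged {
        match runs.last_mut() {
            Some((last_pid, batch)) if *last_pid == pid => batch.push(row),
            _ => runs.push((pid, vec![row])),
        }
    }
    runs
}

fn main() {
    let rows = vec![
        (2, "b".to_string()),
        (0, "a".to_string()),
        (2, "c".to_string()),
    ];
    for (pid, batch) in sort_based_shuffle(rows) {
        println!("partition {}: {:?}", pid, batch);
    }
}
```

Because the runs come out in partition order, the writer also avoids the "too many open files" problem seen above: it never needs one open file per partition.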

@viirya viirya requested a review from andygrove October 9, 2024 23:50
@andygrove
Member

I'm testing this PR now, in conjunction with some other PRs, because I currently have a reproducible deadlock that, as far as I can tell, is caused by memory pool issues.

@andygrove andygrove left a comment

Thanks @viirya

@andygrove andygrove merged commit e146cfa into apache:main Oct 14, 2024
74 checks passed
@viirya
Member Author

viirya commented Oct 14, 2024

Thanks @andygrove @Kontinuation

@viirya viirya deleted the revise_shuffle_memory branch October 14, 2024 15:27
andygrove added a commit to andygrove/datafusion-comet that referenced this pull request Oct 15, 2024
viirya added a commit to viirya/arrow-datafusion-comet that referenced this pull request Oct 15, 2024
viirya added a commit that referenced this pull request Oct 15, 2024
viirya added five commits to viirya/arrow-datafusion-comet that referenced this pull request Oct 16, 2024
andygrove pushed a commit that referenced this pull request Oct 19, 2024
* Revert "chore: Revert "chore: Reserve memory for native shuffle writer per partition (#988)" (#1020)"

This reverts commit 8d097d5.

* fix

* fix

* fix

* fix
Development

Successfully merging this pull request may close these issues.

Memory over-reservation when running native shuffle write
4 participants