
Spark 3.5: Spark action to compute the partition stats #9437

Closed
wants to merge 2 commits

Conversation

ajantha-bhat
Member

@ajantha-bhat commented Jan 8, 2024

Depends on #9170

Fixes: #8459

@@ -70,4 +70,10 @@ default RewritePositionDeleteFiles rewritePositionDeletes(Table table) {
throw new UnsupportedOperationException(
this.getClass().getName() + " does not implement rewritePositionDeletes");
}

/** Instantiates an action to compute partition statistics and register it to table metadata */
Contributor

Minor: Missing dot at the end of the sentence?

import org.apache.iceberg.PartitionStatisticsFile;

/**
* An action to compute partition stats for the latest snapshot and registers it to the
Contributor

What about this? I am not sure I'd mention the TableMetadata class or necessarily limit ourselves to the latest snapshot; we may support branches as well in the future.

An action to compute and register partition stats.

* An action to compute partition stats for the latest snapshot and registers it to the
* TableMetadata file
*/
public interface ComputePartitionStats
Contributor

Question: Do we have to support branches/tags/snapshot IDs? It is not required in the initial version but I guess it makes sense in general?

Member Author

We should support it. Since stats are mapped to a snapshot ID, it should be easy to work with branches and tags. I will handle this in a follow-up.

}

private Result doExecute() {
long currentSnapshotId = table.currentSnapshot().snapshotId();
Contributor

Hm, won't this generate an NPE, as the current snapshot would be null if the table is empty? Do we have a test for this? Also, shouldn't we return a valid result object with a null output file rather than a null result?
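
A minimal defensive check could look like the sketch below; the ImmutableComputePartitionStats.Result builder used here is a hypothetical name for whatever result type the action ends up defining, not something taken from this PR:

```java
private Result doExecute() {
  Snapshot currentSnapshot = table.currentSnapshot();
  if (currentSnapshot == null) {
    // empty table: return a valid result with no stats file instead of risking an NPE
    return ImmutableComputePartitionStats.Result.builder()
        .partitionStatisticsFile(null) // hypothetical builder method
        .build();
  }

  long currentSnapshotId = currentSnapshot.snapshotId();
  // ... compute and register stats for currentSnapshotId ...
}
```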

Broadcast<Table> tableBroadcast = sparkContext.broadcast(serializableTable);
int numShufflePartitions = spark.sessionState().conf().numShufflePartitions();

return manifestBeanDS(table, null, numShufflePartitions)
Contributor

Is it actually correct? This code would go via the ALL_MANIFESTS table. Shouldn't we only look for manifests in a particular snapshot for which we compute the stats?

In general, I am not sure how I feel about the current approach. It seems the only benefit of using the Spark resources here is to parallelize the read of manifests. However, we bring the entire content of the dataset to the driver and then try to merge it. This operation will be costly and may diminish the benefits of using the cluster resources for reading in the first place. Even though it is an action, it does not necessarily need to utilize the cluster.

I'd probably modify the distributed algorithm and also compare it against a local implementation.

Potential Distributed Algorithm

  • Load entries metadata table for the snapshot for which we compute stats as Dataset<Row>. I believe we can use loadMetadataTable from BaseSparkAction for that.
  • Distribute these records by partition using hash distribution to co-locate entries for the same partition next to each other. Still use plain Row for this.
  • Have either a SQL expression (preferable) or closure (still OK but will require extra serialization) to squash these records and only have one record per partition as output.
  • Collect the squashed results to the driver.
  • Write the records into a partition stats file.

Compared to the current solution, we will not only parallelize the reading phase but also the reduction of entries. This will also lower the transfer and serialization costs, potentially making the distributed approach worth the extra complexity (has to be proved).
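
A rough sketch of that flow, written as a method that would live inside the Spark action (assumptions: loadMetadataTable is inherited from BaseSparkAction, numShufflePartitions comes from the session config as in the diff above, and the entries-table column names and aggregates are illustrative rather than the final partition-stats schema):

```java
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.count;
import static org.apache.spark.sql.functions.max;
import static org.apache.spark.sql.functions.sum;

import java.util.List;
import org.apache.iceberg.MetadataTableType;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

private List<Row> computeSquashedPartitionRows() {
  // 1. load the entries metadata table for the snapshot as Dataset<Row>
  Dataset<Row> entries = loadMetadataTable(table, MetadataTableType.ENTRIES);

  // 2. hash-distribute by partition to co-locate entries of the same partition,
  // 3. then squash them with a SQL aggregation into one row per spec/partition
  Dataset<Row> squashed =
      entries
          .filter(col("status").notEqual(2)) // skip entries marked as DELETED
          .repartition(numShufflePartitions, col("data_file.partition"))
          .groupBy(col("data_file.spec_id"), col("data_file.partition"))
          .agg(
              count("*").as("data_file_count"),
              sum(col("data_file.record_count")).as("record_count"),
              sum(col("data_file.file_size_in_bytes")).as("total_data_file_size_in_bytes"),
              max(col("snapshot_id")).as("last_updated_snapshot_id"));

  // 4. collect the already-reduced rows to the driver;
  // 5. writing them into a partition stats file is left out of this sketch
  return squashed.collectAsList();
}
```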

Potential Local Algorithm

In any case, I'd also compare a purely local solution with a thread pool.

  • Open all manifests for the given snapshot concurrently.
  • Either use a common concurrent hash map to hold reduced values and update it in each thread OR reduce each manifest concurrently and then merge results across manifests in a single thread.

I'd start by creating an efficient local implementation and testing it with 1 and 10 million files. It should be fairly simple now by using FileGenerationUtil. We have plenty of benchmarks that leverage that. Afterwards, we can see if a distributed approach is necessary.
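
For comparison, a self-contained sketch of the second local variant (reduce each manifest concurrently, then merge the per-manifest results in a single thread). It assumes a single partition spec so table.spec().partitionType() can key the maps, looks only at data manifests, and tracks just three illustrative counters:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.iceberg.DataFile;
import org.apache.iceberg.ManifestFile;
import org.apache.iceberg.ManifestFiles;
import org.apache.iceberg.ManifestReader;
import org.apache.iceberg.Snapshot;
import org.apache.iceberg.StructLike;
import org.apache.iceberg.Table;
import org.apache.iceberg.util.StructLikeMap;

// Illustrative per-partition counters; a real implementation would track the full schema.
class LocalPartitionStats {
  long dataFileCount;
  long recordCount;
  long totalFileSizeInBytes;
}

class LocalPartitionStatsComputer {

  // Reduce each manifest in its own task, then merge the per-manifest maps in a
  // single thread, so no map is ever mutated concurrently.
  StructLikeMap<LocalPartitionStats> compute(Table table, Snapshot snapshot) throws Exception {
    ExecutorService pool =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    try {
      List<Future<StructLikeMap<LocalPartitionStats>>> perManifest = new ArrayList<>();
      for (ManifestFile manifest : snapshot.dataManifests(table.io())) {
        perManifest.add(pool.submit(() -> reduceManifest(table, manifest)));
      }

      StructLikeMap<LocalPartitionStats> merged =
          StructLikeMap.create(table.spec().partitionType());
      for (Future<StructLikeMap<LocalPartitionStats>> future : perManifest) {
        future.get().forEach((partition, stats) -> {
          LocalPartitionStats target =
              merged.computeIfAbsent(partition, key -> new LocalPartitionStats());
          target.dataFileCount += stats.dataFileCount;
          target.recordCount += stats.recordCount;
          target.totalFileSizeInBytes += stats.totalFileSizeInBytes;
        });
      }
      return merged;
    } finally {
      pool.shutdown();
    }
  }

  private StructLikeMap<LocalPartitionStats> reduceManifest(Table table, ManifestFile manifest)
      throws Exception {
    StructLikeMap<LocalPartitionStats> stats = StructLikeMap.create(table.spec().partitionType());
    try (ManifestReader<DataFile> reader = ManifestFiles.read(manifest, table.io())) {
      for (DataFile file : reader) {
        // copy before keying: the reader may reuse the underlying object while iterating
        StructLike partition = file.copyWithoutStats().partition();
        LocalPartitionStats entry =
            stats.computeIfAbsent(partition, key -> new LocalPartitionStats());
        entry.dataFileCount += 1;
        entry.recordCount += file.recordCount();
        entry.totalFileSizeInBytes += file.fileSizeInBytes();
      }
    }
    return stats;
  }
}
```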

Contributor

What do you think, @ajantha-bhat?

Member Author

Is it actually correct? This code would go via the ALL_MANIFESTS table. Shouldn't we only look for manifests in a particular snapshot for which we compute the stats?

True. I confused snapshot().allManifests() with the all-manifests table. I need to change this.

And thanks for the detailed distributed and local algorithms. For my distributed algorithm, I faced a problem with serialization of partitionData (an Avro class issue), which is why I had to keep most of the logic on the driver.

I am not fully sure how to implement the distributed algorithm you have suggested. I will explore it.

In the meantime, you can also review #9170 (which is independent and a prerequisite for this PR).

Contributor

I think our decisions in this PR would affect what we do in #9170, so let's wait for some clarity here.

@ajantha-bhat
Member Author

I did some benchmarking using FileGenerationUtil (changes included in the PR as TestPartitionStatsPerf).
Looks like the local algorithm performs better than the distributed one.

Case 1: FileGenerationUtil.generateDataFile took 30 minutes to generate 10k partitions with 2 data file entries for each partition.

1.4 seconds - local algorithm
3.3 seconds - distributed algorithm

Case 2: FileGenerationUtil.generateDataFile took 25 seconds to generate 20 partitions with 10K data file entries for each partition.

1.7 seconds - local algorithm
4.1 seconds - distributed algorithm

Note: For case 1, I can increase the number of partitions some more, but the generation takes hours. Will try it out tonight.

"TOTAL_RECORD_COUNT",
"LAST_UPDATED_AT",
"LAST_UPDATED_SNAPSHOT_ID")
.coalesce(1)
Contributor

Do we need to sort by PARTITION_DATA here? I think it's required by the spec.
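
If so, one option (a sketch; summaryDF and the PARTITION_DATA column name are placeholders for whatever this PR actually uses) is to coalesce first and then sort within the single remaining partition so the output order is deterministic:

```java
import static org.apache.spark.sql.functions.col;

// one output partition, sorted by the partition tuple as the spec requires
summaryDF
    .coalesce(1)
    .sortWithinPartitions(col("PARTITION_DATA"));
```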

@lirui-apache
Contributor

Hey @ajantha-bhat, nice feature! Any plans to move this forward?

@ajantha-bhat
Member Author

@lirui-apache: Waiting on @aokolnychyi's feedback. I will rebase the PR today. Looks like the local algorithm is efficient.

@ajantha-bhat
Member Author

The recent JUnit 5 migration has caused a CI failure for this PR. I will update the new test case to JUnit 5 today.

@ajantha-bhat
Member Author

The 3.3 CI failed due to a known flaky issue.

@ajantha-bhat
Member Author

Retriggering the build due to a flaky test.

@ajantha-bhat
Member Author

ping @aokolnychyi

@aokolnychyi
Contributor

Will take a look this week.

@aokolnychyi
Contributor

I cloned this change and played with it locally. Here are my thoughts.

  1. We should focus on the local implementation for now. I think it is going to perform OK for most use cases and doing an efficient distributed implementation would be fairly hard. Even if we come up with that, the cost of transferring the results back to the driver may outweigh everything else. Let's focus on the local implementation.
  2. If we stay local, we may skip the action and provide PartitionStatsGenerator or something similar in core.
  3. It is possible that some of the snapshots will be expired by the time we compute partition stats. Therefore, we will not be able to determine the last snapshot that modified some partitions. It is OK but the algorithm should account for that.
  4. The snapshot ID is random and there may be clock skew that affects the commit timestamp. We should be relying on snapshot ordinals like in CDC scans to determine the snapshot order.
  5. Different partitions may have the same unified partition tuple but it does not make them the same. For instance, I may have a spec1 with p1=a and a spec2 with p1=a/p2=null. Their unified partition tuples are the same but we cannot squash them into one summary entry. Instead, we should persist them separately with different spec IDs. This means the algorithm should be adjusted (a small sketch of such a keying scheme follows this list). Keep in mind that PartitionMap is not thread-safe and cannot be used globally.
  6. The partition summaries should be sorted before they are written out (as required by the spec).
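
To illustrate point 5, the summary key has to carry the spec ID alongside the partition tuple so that p1=a under spec1 and p1=a/p2=null under spec2 are never squashed together. A minimal sketch with illustrative names (in Iceberg the tuple would be a StructLike, and PartitionMap models exactly this pairing but, as noted, is not thread-safe):

```java
import java.util.Objects;

// Illustrative only: stats entries are keyed by (specId, partition tuple), so two
// partitions with identical unified tuples but different specs are never squashed.
final class PartitionStatsKey {
  final int specId;
  final Object partitionTuple; // a StructLike in Iceberg, simplified here

  PartitionStatsKey(int specId, Object partitionTuple) {
    this.specId = specId;
    this.partitionTuple = partitionTuple;
  }

  @Override
  public boolean equals(Object other) {
    if (!(other instanceof PartitionStatsKey)) {
      return false;
    }
    PartitionStatsKey that = (PartitionStatsKey) other;
    return specId == that.specId && Objects.equals(partitionTuple, that.partitionTuple);
  }

  @Override
  public int hashCode() {
    return Objects.hash(specId, partitionTuple);
  }
}
```

Per point 6, the collected summaries would still need to be sorted by partition before being written out.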

@ajantha-bhat
Member Author

ajantha-bhat commented Apr 18, 2024

@aokolnychyi:

Please find the new PR that implements only the local algorithm:
#10176

  • We should focus on the local implementation for now. I think it is going to perform OK for most use cases and doing an efficient distributed implementation would be fairly hard. Even if we come up with that, the cost of transferring the results back to the driver may outweigh everything else. Let's focus on the local implementation.
  • If we stay local, we may skip the action and provide PartitionStatsGenerator or something similar in core.
  • It is possible that some of the snapshots will be expired by the time we compute partition stats. Therefore, we will not be able to determine the last snapshot that modified some partitions. It is OK but the algorithm should account for that.
  • The partition summaries should be sorted before they are written out (as required by the spec).

Done

  • The snapshot ID is random and there may be clock skew that affects the commit timestamp. We should be relying on snapshot ordinals like in CDC scans to determine the snapshot order.

This is not applicable to the current logic, right? I am using table.currentSnapshot(). This may be needed when we introduce incremental updates in the future.

  • Different partitions may have the same unified partition tuple but it does not make them the same. For instance, I may have a spec1 with p1=a and a spec2 with p1=a/p2=null. Their unified partition tuples are the same but we cannot squash them into one summary entry. Instead, we should persist them separately with different spec IDs. This means the algorithm should be adjusted. Keep in mind that PartitionMap is not thread-safe and cannot be used globally.

I didn't get this point. The unified tuple design is similar to the partitions metadata table. Also, if we consider the users of partition stats, they ultimately want the stats for a filter query like p1=a/p2=null, so shouldn't the stats be for the unified tuple, since the query applies to the whole data rather than only to the new spec?

@ajantha-bhat
Member Author

Closing this PR as we decided to move forward with the local algorithm instead of the distributed algorithm.
