[SPARK-52228][SS][PYSPARK] Construct the benchmark purposed TWS state server with in-memory state impls and the benchmark code in python #50952

Conversation

@HeartSaVioR HeartSaVioR commented May 20, 2025

What changes were proposed in this pull request?

This PR proposes to introduce a benchmark tool that can performance-test the state interactions between the TWS state server and the Python worker.

Since it requires two processes (JVM and Python) communicating over a socket, we do not follow the benchmark suites we have in the SQL module for now; the tool is left to be run manually. Ideally this would be standardized with the existing benchmark suites and run automatically, but that is not an immediate goal.
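The core of such a benchmark is timing request/response round trips over a socket between the two processes. The sketch below is a toy illustration of that measurement loop, not the PR's code: a local echo server thread stands in for the JVM state server, and all names here (`echo_server`, `benchmark_round_trips`) are hypothetical.

```python
import socket
import threading
import time

def echo_server(server_sock):
    # Stand-in for the JVM state server: echoes each request back.
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

def benchmark_round_trips(port, n=200):
    # Measure the average wall-clock time of n request/response round trips.
    with socket.create_connection(("127.0.0.1", port)) as client:
        start = time.perf_counter()
        for _ in range(n):
            client.sendall(b"get-state")  # placeholder "state request" payload
            client.recv(1024)             # wait for the echoed response
        elapsed = time.perf_counter() - start
    return elapsed / n  # average seconds per round trip

# Bind to an ephemeral port and run the echo server in a daemon thread.
server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=echo_server, args=(server_sock,), daemon=True).start()

avg = benchmark_round_trips(port)
print(f"avg round trip: {avg * 1e6:.1f} us")
```

The real benchmark would replace the echo payloads with actual state requests (get/put against the in-memory state implementations), but the timing structure is the same.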

Why are the changes needed?

It has been very painful to benchmark and analyze the performance of state interactions. It required adding debug logs and running end-to-end queries, which is a lot of work just to see the numbers.

For example, once this benchmark tool is introduced, we can verify upcoming improvements w.r.t. state interactions. We still have spots where Arrow could be used in state interactions, and this tool can show the performance benefit of such a fix.

Does this PR introduce any user-facing change?

No.

How was this patch tested?

Manually tested.

TWS Python state server

  • Build the Spark repo via ./dev/make-distribution.sh
  • cd dist
  • java -classpath "./jars/*" --add-opens=java.base/java.nio=org.apache.arrow.memory.core,ALL-UNNAMED org.apache.spark.sql.execution.python.streaming.BenchmarkTransformWithStateInPySparkStateServer

Python process (benchmark code)

  • cd python
  • python3 pyspark/sql/streaming/benchmark/benchmark_tws_state_server.py <port the state server uses> <state type> <params if required>

For the Python process, first install the libraries PySpark requires (including numpy, since it is used in the benchmark).
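The invocation above takes positional arguments: a port, a state type, and optional state-specific parameters. A hypothetical argparse sketch of that CLI shape (the argument names here are assumptions, not the actual script's interface) might look like:

```python
import argparse

def parse_benchmark_args(argv):
    # Hypothetical CLI mirroring the invocation shown above:
    #   benchmark_tws_state_server.py <port> <state type> <params if required>
    parser = argparse.ArgumentParser(
        description="TWS state server benchmark (sketch)")
    parser.add_argument("port", type=int,
                        help="port the state server listens on")
    parser.add_argument("state_type",
                        help="which state implementation to exercise")
    parser.add_argument("params", nargs="*",
                        help="extra parameters for the chosen state type")
    return parser.parse_args(argv)

args = parse_benchmark_args(["9999", "list-state", "100"])
print(args.port, args.state_type, args.params)
```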

Results are printed like the following (note: I ran the same benchmark 3 times):
https://gist.github.com/HeartSaVioR/fa4805af4d7a4dc9789c8e3437506be1

Was this patch authored or co-authored using generative AI tooling?

No.

@HeartSaVioR (Contributor, Author) commented:

I'm open to suggestions w.r.t. the package/module path, better instructions, ideas for standardization, etc. Thanks!
