
new ppl-spark logical plan translation #31

Closed
wants to merge 26 commits

Conversation

@YANG-DB (Member) commented Sep 6, 2023

Description

This is the initial work on Spark's capability to execute PPL queries natively.

Implementation Details

1. Using Spark Connect (PPL grammar to DataFrame API translation)

In Apache Spark 3.4, Spark Connect introduced a decoupled client-server architecture that allows remote connectivity to Spark clusters using the DataFrame API and unresolved logical plans as the protocol.

How Spark Connect works:
The Spark Connect client library is designed to simplify Spark application development. It is a thin API that can be embedded everywhere: in application servers, IDEs, notebooks, and programming languages. The Spark Connect API builds on Spark’s DataFrame API using unresolved logical plans as a language-agnostic protocol between the client and the Spark driver.

The Spark Connect client translates DataFrame operations into unresolved logical query plans which are encoded using protocol buffers. These are sent to the server using the gRPC framework.
The Spark Connect endpoint embedded on the Spark Server receives and translates unresolved logical plans into Spark’s logical plan operators. This is similar to parsing a SQL query, where attributes and relations are parsed and an initial parse plan is built. From there, the standard Spark execution process kicks in, ensuring that Spark Connect leverages all of Spark’s optimizations and enhancements. Results are streamed back to the client through gRPC as Apache Arrow-encoded row batches.
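
For orientation, here is a minimal sketch of what a client interaction through Spark Connect could look like in Scala (assuming the `spark-connect-client-jvm` artifact and a Spark Connect endpoint running locally on the default port; the connection string and table name are placeholders, not part of this PR):

```scala
import org.apache.spark.sql.SparkSession

object SparkConnectSketch {
  def main(args: Array[String]): Unit = {
    // The client library builds unresolved logical plans and ships them over gRPC
    // to the remote Spark Connect endpoint configured here.
    val spark = SparkSession.builder()
      .remote("sc://localhost:15002") // default Spark Connect port
      .getOrCreate()

    // Transformations stay on the client as an unresolved plan;
    // only an action such as show() triggers execution on the remote driver.
    spark.read.table("my_table")
      .filter("age = 25")
      .select("name", "age")
      .show()
  }
}
```

In the Spark Connect option, the PPL grammar would be translated into exactly these kinds of DataFrame calls on the client side.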

Advantages:
Stability: Applications that use too much memory will now only impact their own environment as they can run in their own processes. Users can define their own dependencies on the client and don’t need to worry about potential conflicts with the Spark driver.

Upgradability: The Spark driver can now seamlessly be upgraded independently of applications, for example to benefit from performance improvements and security fixes. This means applications can be forward-compatible, as long as the server-side RPC definitions are designed to be backwards compatible.

Debuggability and observability: Spark Connect enables interactive debugging during development directly from your favorite IDE. Similarly, applications can be monitored using the application’s framework native metrics and logging libraries.

No need to separate PPL into a dedicated library - everything can be done from the existing SQL repository.

Disadvantages:
Not all managed Spark solutions support this "new" feature, so as part of using this capability we would need to manually deploy the corresponding spark-connect plugins as part of Flint's deployment.

All the context creation would have to be done from the Spark client - this adds complexity, since the Flint Spark plugin has contextual requirements that would somehow have to be propagated from the client's side.


Selected solution

As presented here and detailed in the issue, there are several options for allowing Spark to understand and run PPL queries.
The selected option is to use the PPL AST logical plan API and its traversals to transform the PPL logical plan into a Catalyst logical plan (a minimal translation sketch follows the list below), thus enabling:

  • reuse of existing PPL code that is tested and in production
  • simplified development, relying on a well-known and structured codebase
  • long-term support in case spark-connect becomes the user-chosen strategy - the existing code can be reused without changes
  • a single place of maintenance - by reusing the PPL logical model, which relies on the PPL ANTLR parser, we can use a single repository to maintain and develop the PPL language without the need to constantly merge changes upstream.
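
To make the selected approach concrete, below is a minimal, self-contained sketch of such a traversal. The PPL-style node classes are simplified stand-ins invented for illustration (the real ones live under `org.opensearch.sql.ast`); only the Catalyst classes are actual Spark APIs:

```scala
import org.apache.spark.sql.catalyst.analysis.{UnresolvedAttribute, UnresolvedRelation}
import org.apache.spark.sql.catalyst.expressions.{EqualTo, Expression, Literal}
import org.apache.spark.sql.catalyst.plans.logical.{Filter, LogicalPlan, Project}

// Simplified stand-ins for PPL logical plan nodes (illustration only).
sealed trait PplPlan
case class PplRelation(table: String) extends PplPlan
case class PplFilter(field: String, value: Int, child: PplPlan) extends PplPlan
case class PplFields(fields: Seq[String], child: PplPlan) extends PplPlan

object PplToCatalyst {
  // Recursive traversal: each PPL node becomes an unresolved Catalyst operator,
  // leaving name/type resolution to Spark's analyzer, just like a parsed SQL query.
  def translate(plan: PplPlan): LogicalPlan = plan match {
    case PplRelation(table) =>
      UnresolvedRelation(Seq(table))
    case PplFilter(field, value, child) =>
      val condition: Expression = EqualTo(UnresolvedAttribute(field), Literal(value))
      Filter(condition, translate(child))
    case PplFields(fields, child) =>
      Project(fields.map(name => UnresolvedAttribute(name)), translate(child))
  }
}
```

For example, a query along the lines of `source = t age=25 | fields name, age` would translate to `Project([name, age], Filter(age = 25, UnresolvedRelation(t)))`, which Spark then analyzes, optimizes, and executes like any other logical plan.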

The following diagram shows the high-level architecture of the selected implementation solution:

(Diagram: PPL logical architecture)

The logical architecture shows the following artifacts:

  • Libraries:

    • PPL (the PPL core, protocol, parser & logical plan utils)
    • SQL (the SQL core, protocol, parser - depends on PPL for the logical plan utils)
  • Drivers:

    • PPL OpenSearch Driver (depends on OpenSearch core)
    • PPL Spark Driver (depends on Spark core)
    • PPL Prometheus Driver (directly translates PPL to PromQL)
    • SQL OpenSearch Driver (depends on OpenSearch core)

Physical Architecture:
Currently the drivers reside inside the PPL client repository within the OpenSearch plugins.
The next tasks ahead will resolve this (a build-layout sketch follows the list):

  • Extract the PPL logical component out of the SQL plugin into a (non-plugin) library and publish it to Maven
  • Separate the PPL / SQL drivers inside the OpenSearch PPL client to better distinguish them
  • Create a thin PPL client capable of interacting with the PPL driver regardless of which driver is used (Spark, OpenSearch, Prometheus)
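
To illustrate the intended module layout, here is a hypothetical build.sbt sketch (not the repository's actual build definition; the organization coordinates and Spark version are assumed placeholders) showing the PPL translation code as a standalone, Maven-publishable sbt module that the Spark driver depends on:

```scala
// Hypothetical sketch only: a standalone PPL-to-Catalyst module published as a
// plain library (not an OpenSearch plugin), per the extraction task above.
lazy val pplSparkIntegration = (project in file("ppl-spark-integration"))
  .settings(
    name := "ppl-spark-integration",
    organization := "org.opensearch",                          // placeholder coordinates
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-sql" % "3.4.1" % "provided" // Catalyst APIs; version assumed
    ),
    publishMavenStyle := true                                  // publish to Maven as a library
  )

// The Spark driver (e.g. the Flint integration) would then depend on the module directly:
lazy val flintSparkIntegration = (project in file("flint-spark-integration"))
  .dependsOn(pplSparkIntegration)
```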

Issues Resolved

#30

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

…se.yml

Signed-off-by: YANGDB <yang.db.dev@gmail.com>
…se.yml

Signed-off-by: YANGDB <yang.db.dev@gmail.com>
…se.yml

Signed-off-by: YANGDB <yang.db.dev@gmail.com>
…se.yml

Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
add ppl statement logical plan elements
add ppl parser components
add ppl expressions components

Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
@dai-chen (Collaborator) left a comment


Please check if you can split it into smaller PRs for reviewers' convenience. Thanks!

Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
 -  source = $testTable
 -  source = $testTable | fields name, age
 -  source = $testTable age=25 | fields name, age

Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
Signed-off-by: YANGDB <yang.db.dev@gmail.com>
add AggregateFunction translation & tests
remove unused DSL builder

Signed-off-by: YANGDB <yang.db.dev@gmail.com>
# Conflicts:
#	build.sbt
#	flint-spark-integration/src/test/scala/org/opensearch/flint/spark/skipping/FlintSparkSkippingIndexSuite.scala
#	ppl-spark-integration/README.md
#	ppl-spark-integration/src/main/java/org/opensearch/sql/ast/expression/Span.java
#	ppl-spark-integration/src/main/java/org/opensearch/sql/ppl/CatalystPlanContext.java
#	ppl-spark-integration/src/main/java/org/opensearch/sql/ppl/CatalystQueryPlanVisitor.java
#	ppl-spark-integration/src/main/java/org/opensearch/sql/ppl/utils/AggregatorTranslator.java
#	ppl-spark-integration/src/main/java/org/opensearch/sql/ppl/utils/ComparatorTransformer.java
#	ppl-spark-integration/src/main/java/org/opensearch/sql/ppl/utils/DataTypeTransformer.java
#	ppl-spark-integration/src/main/scala/org/opensearch/flint/spark/FlintPPLSparkExtensions.scala
#	ppl-spark-integration/src/main/scala/org/opensearch/flint/spark/ppl/FlintSparkPPLParser.scala
#	ppl-spark-integration/src/main/scala/org/opensearch/flint/spark/ppl/PPLSyntaxParser.scala
#	ppl-spark-integration/src/test/scala/org/opensearch/flint/spark/ppl/LogicalPlanTestUtils.scala
#	ppl-spark-integration/src/test/scala/org/opensearch/flint/spark/ppl/PPLLogicalPlanAggregationQueriesTranslatorTestSuite.scala
#	ppl-spark-integration/src/test/scala/org/opensearch/flint/spark/ppl/PPLLogicalPlanFiltersTranslatorTestSuite.scala
…rk-flint tests

Signed-off-by: YANGDB <yang.db.dev@gmail.com>
@YANG-DB closed this Oct 5, 2023