
Commit 9e492b7

HyukjinKwon authored and dongjoon-hyun committed
[SPARK-45963][SQL][DOCS][3.5] Restore documentation for DSv2 API
This PR cherry-picks #43855 to branch-3.5.

---

### What changes were proposed in this pull request?

This PR restores the DSv2 documentation. #38392 mistakenly added `org/apache/spark/sql/connect` as a private package, and that prefix also matches `org/apache/spark/sql/connector`.

### Why are the changes needed?

For end users to read the DSv2 documentation.

### Does this PR introduce _any_ user-facing change?

Yes, it restores the DSv2 API documentation that used to be there: https://spark.apache.org/docs/3.3.0/api/scala/org/apache/spark/sql/connector/catalog/index.html

### How was this patch tested?

Manually tested via:

```
SKIP_PYTHONDOC=1 SKIP_RDOC=1 SKIP_SQLDOC=1 bundle exec jekyll build
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #43865 from HyukjinKwon/SPARK-45963-3.5.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
1 parent 01eb6c8 · commit 9e492b7

File tree

3 files changed (+5, -5 lines)


project/SparkBuild.scala

Lines changed: 1 addition & 1 deletion
```diff
@@ -1401,7 +1401,7 @@ object Unidoc {
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/util/io")))
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/util/kvstore")))
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/catalyst")))
-      .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/connect")))
+      .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/connect/")))
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/execution")))
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/internal")))
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/hive")))
```
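The one-character fix above works because the Unidoc filter does a plain substring match on canonical file paths, and `connect` is a prefix of `connector`. A minimal sketch of the difference, using illustrative paths rather than the actual build logic:

```scala
// Minimal sketch of the substring bug; the file paths are illustrative.
object PrefixMatchDemo extends App {
  val connectorFile = "org/apache/spark/sql/connector/catalog/Table.java"
  val connectFile   = "org/apache/spark/sql/connect/client/SomeClient.scala"

  // Old filter: "connect" is a prefix of "connector", so DSv2 sources under
  // org/apache/spark/sql/connector were accidentally excluded from the docs.
  assert(connectorFile.contains("org/apache/spark/sql/connect"))
  assert(connectFile.contains("org/apache/spark/sql/connect"))

  // Fixed filter: the trailing slash matches only the connect package itself,
  // so the connector package stays in the generated documentation.
  assert(!connectorFile.contains("org/apache/spark/sql/connect/"))
  assert(connectFile.contains("org/apache/spark/sql/connect/"))
}
```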

sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsMetadataColumns.java

Lines changed: 2 additions & 2 deletions
```diff
@@ -58,8 +58,8 @@ public interface SupportsMetadataColumns extends Table {
    * Determines how this data source handles name conflicts between metadata and data columns.
    * <p>
    * If true, spark will automatically rename the metadata column to resolve the conflict. End users
-   * can reliably select metadata columns (renamed or not) with {@link Dataset.metadataColumn}, and
-   * internal code can use {@link MetadataAttributeWithLogicalName} to extract the logical name from
+   * can reliably select metadata columns (renamed or not) with {@code Dataset.metadataColumn}, and
+   * internal code can use {@code MetadataAttributeWithLogicalName} to extract the logical name from
    * a metadata attribute.
    * <p>
    * If false, the data column will hide the metadata column. It is recommended that Table
```

sql/catalyst/src/test/scala/org/apache/spark/sql/connector/catalog/InMemoryBaseTable.scala

Lines changed: 2 additions & 2 deletions
```diff
@@ -619,9 +619,9 @@ class BufferedRows(val key: Seq[Any] = Seq.empty) extends WriterCommitMessage
 }
 
 /**
- * Theoretically, [[InternalRow]] returned by [[HasPartitionKey#partitionKey()]]
+ * Theoretically, `InternalRow` returned by `HasPartitionKey#partitionKey()`
  * does not need to implement equal and hashcode methods.
- * But [[GenericInternalRow]] implements equals and hashcode methods already. Here we override it
+ * But `GenericInternalRow` implements equals and hashcode methods already. Here we override it
  * to simulate that it has not been implemented to verify codes correctness.
  */
 case class PartitionInternalRow(keys: Array[Any])
```
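Both documentation tweaks above apply the same rule: Scaladoc's `[[...]]` (like Javadoc's `{@link}`) asks the doc tool to resolve the target, which produces warnings or broken links when the symbol is not visible to the Unidoc run, while backticks (like Javadoc's `{@code}`) render plain monospace text with no resolution step. A minimal Scaladoc sketch of the two styles, with illustrative names:

```scala
/**
 * Resolved link: [[scala.Option]] becomes a hyperlink, and scaladoc warns
 * ("Could not find any member to link for ...") when the target cannot be
 * resolved, which is what happens for symbols excluded from the Unidoc run.
 *
 * Monospace only: `HasPartitionKey#partitionKey()` renders as code with no
 * resolution step, so it can safely mention symbols outside the documented set.
 */
object DocLinkStyles
```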
