
Commit a7147c8

HyukjinKwon authored and dongjoon-hyun committed
[SPARK-45963][SQL][DOCS] Restore documentation for DSv2 API
### What changes were proposed in this pull request?

This PR restores the DSv2 documentation. #38392 mistakenly added `org/apache/spark/sql/connect` as a private package, and that prefix also matches `org/apache/spark/sql/connector`.

### Why are the changes needed?

For end users to read the DSv2 documentation.

### Does this PR introduce _any_ user-facing change?

Yes, it restores the DSv2 API documentation that used to be there: https://spark.apache.org/docs/3.3.0/api/scala/org/apache/spark/sql/connector/catalog/index.html

### How was this patch tested?

Manually tested via:

```
SKIP_PYTHONDOC=1 SKIP_RDOC=1 SKIP_SQLDOC=1 bundle exec jekyll build
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #43855 from HyukjinKwon/connector-docs.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
1 parent: a5fe85f · commit: a7147c8

File tree

3 files changed: +4 -4 lines changed

project/SparkBuild.scala

Lines changed: 1 addition & 1 deletion

```diff
@@ -1361,7 +1361,7 @@ object Unidoc {
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/util/io")))
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/util/kvstore")))
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/catalyst")))
-      .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/connect")))
+      .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/connect/")))
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/execution")))
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/internal")))
       .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/hive")))
```
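The one-character fix works because `String.contains` performs a plain substring match, so the old `connect` filter also swallowed every `connector` path; the trailing slash restricts the match to the `connect` package itself. A minimal, self-contained sketch of that behavior (the two file paths below are hypothetical examples, not actual Spark source paths):

```scala
object PrefixMatchDemo extends App {
  // A DSv2 source file that should appear in the generated docs.
  val connectorPath =
    "/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/connector/catalog/Table.scala"
  // A Spark Connect source file that should stay out of the docs.
  val connectPath =
    "/spark/connector/connect/src/main/scala/org/apache/spark/sql/connect/SparkSession.scala"

  // Old filter: plain substring match, so "connect" also matches "connector".
  println(connectorPath.contains("org/apache/spark/sql/connect"))   // true  -> wrongly excluded
  // New filter: the trailing slash only matches the connect package itself.
  println(connectorPath.contains("org/apache/spark/sql/connect/"))  // false -> kept in the docs
  println(connectPath.contains("org/apache/spark/sql/connect/"))    // true  -> still excluded
}
```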

sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsMetadataColumns.java

Lines changed: 2 additions & 2 deletions

```diff
@@ -58,8 +58,8 @@ public interface SupportsMetadataColumns extends Table {
    * Determines how this data source handles name conflicts between metadata and data columns.
    * <p>
    * If true, spark will automatically rename the metadata column to resolve the conflict. End users
-   * can reliably select metadata columns (renamed or not) with {@link Dataset.metadataColumn}, and
-   * internal code can use {@link MetadataAttributeWithLogicalName} to extract the logical name from
+   * can reliably select metadata columns (renamed or not) with {@code Dataset.metadataColumn}, and
+   * internal code can use {@code MetadataAttributeWithLogicalName} to extract the logical name from
    * a metadata attribute.
    * <p>
    * If false, the data column will hide the metadata column. It is recommended that Table
```

sql/catalyst/src/main/scala/org/apache/spark/sql/connector/expressions/expressions.scala

Lines changed: 1 addition & 1 deletion

```diff
@@ -156,7 +156,7 @@ private[sql] object BucketTransform {
 }

 /**
- * This class represents a transform for [[ClusterBySpec]]. This is used to bundle
+ * This class represents a transform for `ClusterBySpec`. This is used to bundle
  * ClusterBySpec in CreateTable's partitioning transforms to pass it down to analyzer.
  */
 final case class ClusterByTransform(
```
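Both doc-comment changes follow the same pattern: Javadoc `{@link ...}` and Scaladoc `[[...]]` are resolved link references, and resolution can fail once these files are documented again but the link targets (catalyst-internal types such as `ClusterBySpec`) remain excluded from unidoc; `{@code ...}` and backticks render as plain inline code with no resolution. A minimal Scaladoc sketch of the two styles, using a hypothetical `DocLinkDemo` class that exists only to carry the comment:

```scala
/**
 * Demonstrates the two Scaladoc reference styles touched by this commit.
 *
 * A `[[...]]` reference is resolved when docs are generated, so it can break
 * the build if the target class is excluded from the unidoc output:
 *
 *   [[org.apache.spark.sql.connector.catalog.Table]]   // resolvable: connector docs are public
 *
 * Backticks render as inline code and are never resolved, which makes them
 * safe for types that stay private to the doc build, e.g. `ClusterBySpec`.
 */
final case class DocLinkDemo(name: String)
```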
