Build: Apply spotless for scala code #8023
Conversation
# specific language governing permissions and limitations
# under the License.

align = none
This is modified from Spark.
}
danglingParentheses.preset = false
docstrings.style = Asterisk
maxColumn = 100
Set to 100 to align with the Java code style.
Just curious, what's the default value? For Java the default is mandated by the Google Java Format and is set to 100.
The default is 80; the docs are here: https://scalameta.org/scalafmt/docs/configuration.html#maxcolumn
Resolved review threads (outdated) on:
.../scala/org/apache/spark/sql/catalyst/parser/extensions/IcebergSparkSqlExtensionsParser.scala
...-extensions/src/main/scala/org/apache/spark/sql/execution/datasources/v2/MergeIntoExec.scala
...sions/src/main/scala/org/apache/spark/sql/catalyst/analysis/AssignmentAlignmentSupport.scala
@Fokko are we going to force styles on Scala as well? Not sure why this isn't done when enforcing styles on Java.

Also cc @jackye1995 @amogh-jahagirdar, who have discussed this at #6736.

@ConeyLiu I think this is a great topic to mention on the dev mailing list & Iceberg Sync, so that people are aware of this effort.

Thanks @nastra, raised the discussion on the dev mailing list.

LGTM now, I'd suggest we get this style in early.

Just linking the conclusion here as I was searching for it.

This pull request has been marked as stale due to 30 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that's incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the dev@iceberg.apache.org list. Thank you for your contributions.
This pull request has been closed due to lack of activity. This is not a judgement on the merit of the PR in any way. It is just a way of keeping the PR queue manageable. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time.

@ConeyLiu are you interested in continuing work on this?

Sure, I will rebase it these days.

This pull request has been marked as stale due to 30 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that's incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the dev@iceberg.apache.org list. Thank you for your contributions.

@nastra Does this need a separate commit, or should it be rebased into the first commit 'Add scalafmt and config for spotless'?

Done.

I think adding this into the first commit should be fine.
version = 3.9.7

align = none

For learning, why do we deviate for [Imports]?
    output: Seq[Attribute],
    procedure: Procedure,
    input: InternalRow) extends LeafV2CommandExec {
case class CallExec(output: Seq[Attribute], procedure: Procedure, input: InternalRow)
nit: can we keep the old one to be more like Spark and be more readable (or does it violate scalafmt because of maxColumn?)
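For context (a hypothetical sketch with stand-in types, not the real Spark classes): the collapse happens because the whole parameter list fits within maxColumn = 100, so scalafmt joins it onto one line, and the old hand-wrapped layout would be undone on the next spotlessApply run.

// Stand-in types so the sketch compiles on its own; the real classes come from Spark.
object CallExecLayoutSketch {
  type Attribute = String
  type Procedure = String
  type InternalRow = String
  trait LeafV2CommandExec

  // The parameter list is well under 100 columns, so scalafmt keeps it on one line and only
  // the `extends` clause wraps to the next line.
  case class CallExec(output: Seq[Attribute], procedure: Procedure, input: InternalRow)
      extends LeafV2CommandExec
}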
import scala.jdk.CollectionConverters._

class IcebergSparkSqlExtensionsParser(delegate: ParserInterface) extends ParserInterface with ExtendedParser {
class IcebergSparkSqlExtensionsParser(delegate: ParserInterface)
The import org.apache.iceberg.spark.procedures.SparkProcedures is not used in this class. Should it be removed by spotless?
@singhpk234 Based on the issue and PR in Spotless, the unused imports aren't handled. We need a separate Scalafix rule to take care of that.
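A minimal illustration (hypothetical file, not from the PR) of the split responsibilities: scalafmt/Spotless only reformat, so an unused import survives formatting; removing it would need a separate Scalafix pass, e.g. the RemoveUnused rule, which also requires the compiler's unused-import warnings such as -Wunused:imports on Scala 2.13 (an assumption about how it would be wired up, not something this PR adds).

// Hypothetical example. Running spotlessApply reformats this file but keeps both imports;
// a Scalafix RemoveUnused pass would delete the unused one.
import scala.collection.mutable // used below, kept either way
import scala.jdk.CollectionConverters._ // unused: scalafmt keeps it, Scalafix would remove it

object UnusedImportExample {
  val numbers: mutable.ArrayBuffer[Int] = mutable.ArrayBuffer(1, 2, 3)
}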
@ConeyLiu where are we with this PR? Could you please rebase it so that we can get it in?

I can rebase it today or tomorrow.
kevinjqliu left a comment:

LGTM, thanks for adding this.

I tried this out locally. Looks like the Spark scalafmt config has been modified; maybe we should align with the new configs.
// Configure different scalafmt rules for specific Scala version
if (project.name.startsWith("iceberg-spark") && project.name.endsWith("2.13")) {
  scala {
    target 'src/main/scala/**/*.scala', 'src/test/scala/**/*.scala', 'src/testFixtures/scala/**/*.scala'
    scalafmt("3.9.7").configFile("$rootDir/.baseline/scala/.scala213fmt.conf")
    licenseHeaderFile "$rootDir/.baseline/copyright/copyright-header-java.txt", "package"
  }
} else if (project.name.startsWith("iceberg-spark") && project.name.endsWith("2.12")) {
  scala {
    target 'src/main/scala/**/*.scala', 'src/test/scala/**/*.scala', 'src/testFixtures/scala/**/*.scala'
    scalafmt("3.9.7").configFile("$rootDir/.baseline/scala/.scala212fmt.conf")
    licenseHeaderFile "$rootDir/.baseline/copyright/copyright-header-java.txt", "package"
  }
}
nit: capture all .scala files in case more folders are added in the future. Suggested change:

// Configure different scalafmt rules for specific Scala version
if (project.name.startsWith("iceberg-spark") && project.name.endsWith("2.13")) {
  scala {
    target 'src/**/*.scala'
    scalafmt("3.9.7").configFile("$rootDir/.baseline/scala/.scala213fmt.conf")
    licenseHeaderFile "$rootDir/.baseline/copyright/copyright-header-java.txt", "package"
  }
} else if (project.name.startsWith("iceberg-spark") && project.name.endsWith("2.12")) {
  scala {
    target 'src/**/*.scala'
    scalafmt("3.9.7").configFile("$rootDir/.baseline/scala/.scala212fmt.conf")
    licenseHeaderFile "$rootDir/.baseline/copyright/copyright-header-java.txt", "package"
  }
}
version = 3.9.7

align = none

rewrite.rules has been removed from https://github.com/apache/spark/blob/f5b9ea8103dd1f37c6f0cea0692d7bc5b50b778c/dev/.scalafmt.conf; the only differences now are version (3.8.6 vs 3.9.7), maxColumn (98 vs 100), and docstrings.wrap.
}

// Configure different scalafmt rules for specific Scala version
if (project.name.startsWith("iceberg-spark") && project.name.endsWith("2.13")) {
Do we need to check iceberg-spark? It only affects the Scala code, right? Also, should we get the scalaVersion from System.getProperty rather than from the project name?

I'll follow up on this these days.
Thank you for the PR @ConeyLiu and thanks everyone for the review.
Closes #7695.