
MINOR: Update scalafmt to latest version #12475

Merged: 1 commit, Sep 12, 2022
4 changes: 2 additions & 2 deletions build.gradle
@@ -30,7 +30,7 @@ buildscript {
}

plugins {
- id 'com.diffplug.spotless' version '5.12.5'
+ id 'com.diffplug.spotless' version '6.10.0'
id 'com.github.ben-manes.versions' version '0.42.0'
id 'idea'
id 'java-library'
@@ -47,7 +47,7 @@ plugins {
spotless {
scala {
target 'streams/**/*.scala'
scalafmt("$versions.scalafmt").configFile('checkstyle/.scalafmt.conf')
scalafmt("$versions.scalafmt").configFile('checkstyle/.scalafmt.conf').scalaMajorVersion(versions.baseScala)
licenseHeaderFile 'checkstyle/java.header', 'package'
}
}
5 changes: 4 additions & 1 deletion checkstyle/.scalafmt.conf
@@ -12,7 +12,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- docstrings = JavaDoc
+ version = 3.5.9
+ runner.dialect = scala213
@divijvaidya (Member) commented:
We still support Scala 2.12 in this project.

By specifying the dialect as 2.13, would we fail where syntax for 2.12 is used? If yes, are there any such cases in the project? (The build should tell us that after the existing spotless check passes.)

@mdedetrich (Contributor, Author) commented on Aug 3, 2022:

The Scala 2.13 dialect is backwards compatible with 2.12. I have applied the dialect when updating from scalafmt 2 to scalafmt 3 on various popular OSS Scala projects and there weren't any issues (in any case, the Scala code used in Kafka is quite conservative).

> I have verified that the style guide we follow for this project is not prescriptive about formatting of the docs. Hence, your change should be ok.

Do you think it makes sense to format the docs or should I leave the PR as is?

> Note that the build is failing due to changes in this PR. We would probably need to fix the error, but I think I might have already fixed it in MINOR: Catch InvocationTargetException explicitly and propagate underlying cause #12230. If that lands, then the check should pass here.

In that case it's sensible to wait for your PR to land and then I can rebase.

@divijvaidya (Member) commented on Aug 3, 2022:

> Do you think it makes sense to format the docs or should I leave the PR as is?

I think we should leave the PR as is. Converting the docs in the entire project will be cumbersome without any benefit.

> In that case it's sensible to wait for your PR to land and then I can rebase.

Thanks! In that case, would you kindly review that PR if you get some time? It's been pending for a while now, waiting to get some attention.

@mdedetrich (Contributor, Author) commented:

> Thanks! In that case, would you kindly review that PR if you get some time? It's been pending for a while now, waiting to get some attention.

I just landed a review of the PR.

+ docstrings.style = Asterisk
+ docstrings.wrap = false
@mdedetrich (Contributor, Author) commented:
This forces scalafmt to ignore docstrings when formatting code. The new version of scalafmt does format docstrings; however, it doesn't support the manual HTML formatting used in Kafka's Scala streams code.
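For illustration, a minimal sketch of the kind of hand-formatted Scaladoc this setting protects; the object, method, and HTML content are invented, not taken from Kafka:

```scala
object DocstringSketch {

  /**
   * Counts whitespace-separated words in a line.
   * <p>
   * The HTML list below is laid out by hand:
   * <ul>
   *   <li>with docstrings.wrap = false, scalafmt leaves these lines as written</li>
   *   <li>with wrapping enabled, scalafmt would reflow them and break the layout</li>
   * </ul>
   */
  def countWords(line: String): Int = line.split("\\s+").count(_.nonEmpty)
}
```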

maxColumn = 120
continuationIndent.defnSite = 2
assumeStandardLibraryStripMargin = true
5 changes: 4 additions & 1 deletion gradle/dependencies.gradle
@@ -111,7 +111,10 @@ versions += [
reload4j: "1.2.19",
rocksDB: "6.29.4.1",
scalaCollectionCompat: "2.6.0",
scalafmt: "2.7.5",
// When updating the scalafmt version please also update the version field in checkstyle/.scalafmt.conf. scalafmt now
// has the version field as mandatory in its configuration, see
// https://github.com/scalameta/scalafmt/releases/tag/v3.1.0.
scalafmt: "3.5.9",
scalaJava8Compat : "1.0.2",
scoverage: "1.4.11",
slf4j: "1.7.36",
FunctionsCompatConversions.scala
@@ -24,9 +24,8 @@ import java.lang.{Iterable => JIterable}
import org.apache.kafka.streams.processor.ProcessorContext

/**
- * Implicit classes that offer conversions of Scala function literals to
- * SAM (Single Abstract Method) objects in Java. These make the Scala APIs much
- * more expressive, with less boilerplate and more succinct.
+ * Implicit classes that offer conversions of Scala function literals to SAM (Single Abstract Method) objects in Java.
+ * These make the Scala APIs much more expressive, with less boilerplate and more succinct.
*/
private[scala] object FunctionsCompatConversions {

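For readers unfamiliar with the pattern described in the docstring above, a minimal hedged sketch of a function-literal-to-SAM conversion; the names are invented, and it targets java.util.function.Function rather than Kafka's own Java interfaces:

```scala
import java.util.function.{Function => JFunction}

object SamConversionSketch {

  // Wraps a Scala function literal so it can be passed to a Java API
  // that expects a SAM (Single Abstract Method) type.
  implicit class FunctionFromScala[T, R](val f: T => R) extends AnyVal {
    def asJava: JFunction[T, R] = new JFunction[T, R] {
      override def apply(t: T): R = f(t)
    }
  }

  // A Scala lambda becomes a java.util.function.Function via .asJava.
  val toLength: JFunction[String, Integer] = ((s: String) => Integer.valueOf(s.length)).asJava
}
```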
ImplicitConversions.scala
@@ -19,22 +19,21 @@ package org.apache.kafka.streams.scala
import org.apache.kafka.common.serialization.Serde
import org.apache.kafka.streams.KeyValue
import org.apache.kafka.streams.kstream.{
- KStream => KStreamJ,
+ CogroupedKStream => CogroupedKStreamJ,
KGroupedStream => KGroupedStreamJ,
- TimeWindowedKStream => TimeWindowedKStreamJ,
+ KGroupedTable => KGroupedTableJ,
+ KStream => KStreamJ,
+ KTable => KTableJ,
+ SessionWindowedCogroupedKStream => SessionWindowedCogroupedKStreamJ,
SessionWindowedKStream => SessionWindowedKStreamJ,
- CogroupedKStream => CogroupedKStreamJ,
TimeWindowedCogroupedKStream => TimeWindowedCogroupedKStreamJ,
- SessionWindowedCogroupedKStream => SessionWindowedCogroupedKStreamJ,
- KTable => KTableJ,
- KGroupedTable => KGroupedTableJ
+ TimeWindowedKStream => TimeWindowedKStreamJ
}
import org.apache.kafka.streams.processor.StateStore
import org.apache.kafka.streams.scala.kstream._

/**
- * Implicit conversions between the Scala wrapper objects and the underlying Java
- * objects.
+ * Implicit conversions between the Scala wrapper objects and the underlying Java objects.
*/
object ImplicitConversions {

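The reshuffled selectors in this hunk, and in the import hunks below, are the formatter's doing: after the upgrade, selectors inside import braces come out sorted, with renamed selectors ordered by their original name (presumably scalafmt's SortImports rewrite, enabled elsewhere in the project's config). A hedged illustration using java.util rather than Kafka's imports:

```scala
// Before formatting, a selector list might look like:
//   import java.util.{Map => JMap, ArrayList, HashSet}
// After formatting, selectors are sorted by their original (pre-rename) names:
import java.util.{ArrayList, HashSet, Map => JMap}

object ImportSortingSketch {
  // JMap still refers to java.util.Map; only the selector order changed.
  val counts: JMap[String, Integer] = new java.util.HashMap[String, Integer]()
}
```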
CogroupedKStream.scala
@@ -18,11 +18,11 @@ package org.apache.kafka.streams.scala
package kstream

import org.apache.kafka.streams.kstream.{
+ CogroupedKStream => CogroupedKStreamJ,
SessionWindows,
SlidingWindows,
Window,
- Windows,
- CogroupedKStream => CogroupedKStreamJ
+ Windows
}
import org.apache.kafka.streams.scala.FunctionsCompatConversions.{AggregatorFromFunction, InitializerFromFunction}

KGroupedStream.scala
@@ -20,12 +20,12 @@ package kstream
import org.apache.kafka.streams.kstream.internals.KTableImpl
import org.apache.kafka.streams.scala.serialization.Serdes
import org.apache.kafka.streams.kstream.{
+ KGroupedStream => KGroupedStreamJ,
+ KTable => KTableJ,
SessionWindows,
SlidingWindows,
Window,
- Windows,
- KGroupedStream => KGroupedStreamJ,
- KTable => KTableJ
+ Windows
}
import org.apache.kafka.streams.scala.FunctionsCompatConversions.{
AggregatorFromFunction,
KStream.scala
@@ -21,11 +21,11 @@ import org.apache.kafka.streams.KeyValue
import org.apache.kafka.streams.kstream.{
GlobalKTable,
JoinWindows,
+ KStream => KStreamJ,
Printed,
TransformerSupplier,
ValueTransformerSupplier,
- ValueTransformerWithKeySupplier,
- KStream => KStreamJ
+ ValueTransformerWithKeySupplier
}
import org.apache.kafka.streams.processor.TopicNameExtractor
import org.apache.kafka.streams.processor.api.{FixedKeyProcessorSupplier, ProcessorSupplier}
@@ -334,7 +334,7 @@ class KStream[K, V](val inner: KStreamJ[K, V]) {
* @see `org.apache.kafka.streams.kstream.KStream#branch`
* @deprecated since 2.8. Use `split` instead.
*/
- //noinspection ScalaUnnecessaryParentheses
+ // noinspection ScalaUnnecessaryParentheses
@deprecated("use `split()` instead", "2.8")
def branch(predicates: ((K, V) => Boolean)*): Array[KStream[K, V]] =
inner.branch(predicates.map(_.asPredicate): _*).map(kstream => new KStream(kstream))
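Since this hunk touches the deprecated branch API, a short usage sketch grounded in the signature shown above; the topic name and serde setup are assumptions, and split() (the 2.8+ replacement) is not shown:

```scala
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._

object BranchSketch {
  val builder = new StreamsBuilder()
  val stream = builder.stream[String, Long]("input-topic")

  // Each predicate yields one output stream, in order. Deprecated since 2.8
  // in favour of split(), as the Scaladoc above notes.
  val Array(evens, odds) = stream.branch(
    (_, v) => v % 2 == 0,
    (_, v) => v % 2 != 0
  )
}
```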
KTable.scala
@@ -18,7 +18,7 @@ package org.apache.kafka.streams.scala
package kstream

import org.apache.kafka.common.utils.Bytes
- import org.apache.kafka.streams.kstream.{TableJoined, ValueJoiner, ValueTransformerWithKeySupplier, KTable => KTableJ}
+ import org.apache.kafka.streams.kstream.{KTable => KTableJ, TableJoined, ValueJoiner, ValueTransformerWithKeySupplier}
import org.apache.kafka.streams.scala.FunctionsCompatConversions.{
FunctionFromFunction,
KeyValueMapperFromFunction,
Serdes.scala
@@ -20,7 +20,7 @@ import java.nio.ByteBuffer
import java.util
import java.util.UUID

- import org.apache.kafka.common.serialization.{Deserializer, Serde, Serializer, Serdes => JSerdes}
+ import org.apache.kafka.common.serialization.{Deserializer, Serde, Serdes => JSerdes, Serializer}
import org.apache.kafka.streams.kstream.WindowedSerdes

object Serdes extends LowPrioritySerdes {
@@ -25,24 +25,24 @@ import org.apache.kafka.streams.kstream.{
Aggregator,
Initializer,
JoinWindows,
- KeyValueMapper,
- Reducer,
- Transformer,
- ValueJoiner,
- ValueMapper,
KGroupedStream => KGroupedStreamJ,
KStream => KStreamJ,
KTable => KTableJ,
+ KeyValueMapper,
Materialized => MaterializedJ,
- StreamJoined => StreamJoinedJ
+ Reducer,
+ StreamJoined => StreamJoinedJ,
+ Transformer,
+ ValueJoiner,
+ ValueMapper
}
import org.apache.kafka.streams.processor.{api, ProcessorContext}
import org.apache.kafka.streams.processor.api.{Processor, ProcessorSupplier}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.{Serdes => NewSerdes}
import org.apache.kafka.streams.scala.serialization.Serdes._
import org.apache.kafka.streams.scala.kstream._
- import org.apache.kafka.streams.{KeyValue, StreamsConfig, TopologyDescription, StreamsBuilder => StreamsBuilderJ}
+ import org.apache.kafka.streams.{KeyValue, StreamsBuilder => StreamsBuilderJ, StreamsConfig, TopologyDescription}
import org.junit.jupiter.api.Assertions._
import org.junit.jupiter.api._

KStreamTest.scala
@@ -191,7 +191,7 @@ class KStreamTest extends TestDriver {
testDriver.close()
}

- //noinspection ScalaDeprecation
+ // noinspection ScalaDeprecation
@Test
def testJoinCorrectlyRecords(): Unit = {
val builder = new StreamsBuilder()
@@ -19,11 +19,11 @@ package org.apache.kafka.streams.scala.kstream
import org.apache.kafka.streams.kstream.Suppressed.BufferConfig
import org.apache.kafka.streams.kstream.{
Named,
- SlidingWindows,
SessionWindows,
+ SlidingWindows,
+ Suppressed => JSuppressed,
TimeWindows,
- Windowed,
- Suppressed => JSuppressed
+ Windowed
}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._