Commit

Merge pull request #1342 from tulios/v2.0.0-release
Release v2.0.0
Nevon authored May 6, 2022
2 parents 3323145 + 707827b commit 7f70dea
Showing 8 changed files with 242 additions and 8 deletions.
30 changes: 30 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,36 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).

## [2.0.0] - 2022-05-06

This is the first major version released in 4 years, and contains a few important breaking changes. **A [migration guide](https://kafka.js.org/docs/migration-guide-v2.0.0) has been prepared to help with the migration process.** Be sure to read it before upgrading from older versions of KafkaJS.

### Added
- Validate configEntries when creating topics #1309
- New `topics` argument for `consumer.subscribe` to subscribe to multiple topics #1313
- Support duplicate header keys #1132

### Removed
- **BREAKING:** Drop support for Node 10 and 12 #1333
- **BREAKING:** Remove deprecated enum `ResourceTypes` #1334
- **BREAKING:** Remove deprecated argument `topic` from `admin.fetchOffsets` #1335
- **BREAKING:** Remove deprecated method `getTopicMetadata` from admin client #1336
- **BREAKING:** Remove typo type `TopicPartitionOffsetAndMedata` #1338
- **BREAKING:** Remove deprecated error property originalError. Replaced by `cause` #1341

### Changed
- **BREAKING:** Change default partitioner to Java compatible #1339
- Improve consumer performance #1258
- **BREAKING:** Enforce request timeout by default #1337
- Honor default replication factor and partition count when creating topics #1305
- Increase default authentication timeout to 10 seconds #1340

### Fixed
- Fix invalid sequence numbers when producing concurrently with idempotent producer #1050 #1172
- Fix correlation id and sequence number overflow #1310
- Fix consumer not restarting on retriable connection errors #1304
- Avoid endless sleep loop #1323

## [1.16.0] - 2022-02-09

### Added
4 changes: 0 additions & 4 deletions docs/Admin.md
@@ -105,10 +105,6 @@ await admin.createPartitions({
| count | New partition count, mandatory | |
| assignments | Assigned brokers for each new partition | null |

## <a name="get-topic-metadata"></a> Get topic metadata

Deprecated, see [Fetch topic metadata](#fetch-topic-metadata)

## <a name="fetch-topic-metadata"></a> Fetch topic metadata

```javascript
10 changes: 10 additions & 0 deletions docs/Configuration.md
@@ -289,6 +289,16 @@ new Kafka({
})
```

The request timeout can be disabled by setting `enforceRequestTimeout` to `false`.

```javascript
new Kafka({
  clientId: 'my-app',
  brokers: ['kafka1:9092', 'kafka2:9092'],
  enforceRequestTimeout: false
})
```

## Default Retry

The `retry` option can be used to set the configuration of the retry mechanism, which is used to retry connections and API calls to Kafka (when using producers or consumers).
191 changes: 191 additions & 0 deletions docs/MigrationGuide-2-0-0.md
@@ -0,0 +1,191 @@
---
id: migration-guide-v2.0.0
title: Migrating to v2.0.0
---

v2.0.0 is the first major version of KafkaJS released since 2018. For most users, the changes required to upgrade from 1.x.x are minor, but it is still worth reading through the list of changes to know which of them, if any, apply to your application.

## Producer: New default partitioner

> 🚨&nbsp; **Important!** 🚨
>
> Not selecting the right partitioner will cause messages to be produced to different partitions than in versions prior to 2.0.0.

The default partitioner distributes messages consistently based on a hash of the message `key`. v1.8.0 introduced a new partitioner called `JavaCompatiblePartitioner` that behaves the same way but fixes a bug where, in some circumstances, a message with a given key would be assigned to different partitions by KafkaJS and by the Java client.

**In v2.0.0 the following changes have been made**:

* `JavaCompatiblePartitioner` is renamed `DefaultPartitioner`
* The partitioner previously called `JavaCompatiblePartitioner` is selected as the default partitioner if no partitioner is configured.
* The old `DefaultPartitioner` is renamed `LegacyPartitioner`

If no partitioner is selected when creating the producer, a warning will be logged. This warning can be silenced either by specifying a partitioner to use or by setting the environment variable `KAFKAJS_NO_PARTITIONER_WARNING`. This warning will be removed in a future version.
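
For reference, a minimal sketch of silencing the warning through the environment variable; the client id and broker address are placeholders:

```js
// Set before the producer is created; the warning message suggests the value "1".
process.env.KAFKAJS_NO_PARTITIONER_WARNING = '1'

const { Kafka } = require('kafkajs')

const kafka = new Kafka({ clientId: 'my-app', brokers: ['kafka1:9092'] })
const producer = kafka.producer() // no partitioner warning is logged
```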

### What do I need to do?

What you need to do depends on what partitioner you were previously using and whether or not co-partitioning is important to you.

#### "I was previously using the default partitioner and I want to keep the same behavior"

Import the `LegacyPartitioner` and configure your producer to use it:

```js
const { Partitioners } = require('kafkajs')
kafka.producer({ createPartitioner: Partitioners.LegacyPartitioner })
```

#### "I was previously using the `JavaCompatiblePartitioner` and I want to keep that behavior"

The new `DefaultPartitioner` is re-exported as `JavaCompatiblePartitioner`, so existing code will continue to work. However, that export will be removed in a future version, so it's recommended to either remove the partitioner from the configuration or explicitly configure it to use what is now the default partitioner:

```js
// Rely on the default partitioner being compatible with the Java partitioner
kafka.producer()

// Or explicitly use the default partitioner
const { Partitioners } = require('kafkajs')
kafka.producer({ createPartitioner: Partitioners.DefaultPartitioner })
```

#### "I use a custom partitioner"

You don't need to do anything, unless your custom partitioner makes use of either of the two built-in partitioners.

#### "It's not important to me that messages with the same key end up in the same partition as in previous versions"

Use the new default partitioner.

```js
kafka.producer()
```

## Request timeouts enabled

v1.5.1 added a request timeout mechanism. Due to some issues with the initial implementation, it was not enabled by default, but could be enabled using the undocumented `enforceRequestTimeout` flag. Those issues have long since been ironed out, and request timeout enforcement is now enabled by default in v2.0.0.

The request timeout mechanism can be disabled like so:

```javascript
new Kafka({ enforceRequestTimeout: false })
```

See [Request Timeout](/docs/2.0.0/configuration#request-timeout) for more details.

## Consumer: Supporting duplicate header keys

If a message has more than one header value for the same key, previous versions of KafkaJS would discard all but one of the values. Now, when a key has multiple values, they are all returned as an array (keys with a single value still map to a single Buffer).

```js
/**
 * Given a message like this:
 * {
 *   headers: {
 *     event: "birthday",
 *     participants: "Alice",
 *     participants: "Bob"
 *   }
 * }
 */

// Before
> message.headers
{
  event: <Buffer 62 69 72 74 68 64 61 79>,
  participants: <Buffer 42 6f 62>
}

// After
> message.headers
{
  event: <Buffer 62 69 72 74 68 64 61 79>,
  participants: [
    <Buffer 41 6c 69 63 65>,
    <Buffer 42 6f 62>
  ]
}
```

Adapt your code to handle header values that may be arrays:

```js
// Before
const participants = message.headers["participants"].toString()

// After
const participants = Array.isArray(message.headers["participants"])
  ? message.headers["participants"].map(participant => participant.toString()).join(", ")
  : message.headers["participants"].toString()
```
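
If headers are read in several places, a small normalizing helper keeps the call sites simple. A sketch based only on the before/after shapes shown above; the helper name is arbitrary:

```js
// Normalize a header value to an array of strings, whether KafkaJS returned a
// single Buffer (one value) or an array of Buffers (duplicate keys).
const headerValues = (headers, key) => {
  const value = headers[key]
  if (value == null) return []
  return (Array.isArray(value) ? value : [value]).map(v => v.toString())
}

headerValues(message.headers, 'participants') // => ['Alice', 'Bob']
```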

## Admin: `getTopicMetadata` removed

The `getTopicMetadata` method of the admin client has been replaced by `fetchTopicMetadata`. `getTopicMetadata` had limitations that did not allow it to get metadata for all topics in the cluster.

See [Fetch Topic Metadata](/docs/2.0.0/admin#a-name-fetch-topic-metadata-a-fetch-topic-metadata) for details.
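
A minimal sketch of the replacement call; the topic name is a placeholder:

```js
// Fetch metadata for specific topics; omit `topics` to fetch metadata for all topics.
const metadata = await admin.fetchTopicMetadata({ topics: ['topic-name'] })

for (const topic of metadata.topics) {
  admin.logger().info(`${topic.name} has ${topic.partitions.length} partitions`)
}
```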

## Admin: `fetchOffsets` accepts `topics` instead of `topic`

`fetchOffsets` used to only be able to fetch offsets for a single topic, but now it can fetch for multiple topics.

To adapt your code, pass an array of `topics` instead of a single `topic` string, and note that the promise now resolves to an array in which each item contains a `topic` and its array of partition offsets.

```js
// Before
const partitions = await admin.fetchOffsets({ groupId, topic: 'topic-a' })
for (const { partition, offset } of partitions) {
  admin.logger().info(`${groupId} is at offset ${offset} of partition ${partition}`)
}

// After
const topics = await admin.fetchOffsets({ groupId, topics: ['topic-a', 'topic-b'] })
for (const { topic, partitions } of topics) {
  for (const { partition, offset } of partitions) {
    admin.logger().info(`${groupId} is at offset ${offset} of ${topic}:${partition}`)
  }
}
```

## Removed support for Node 10 and 12

KafkaJS supports all maintained versions of Node.js. If you are running Node.js 10 or 12, you will get a warning when installing KafkaJS, and there is no guarantee that it will work. We **strongly** encourage you to upgrade to a supported, secure version of Node.js.

## `originalError` property replaced with `cause`

Some errors that are triggered by other errors, such as `KafkaJSNumberOfRetriesExceeded`, used to have a property called `originalError` containing a reference to the cause. This property has been renamed to `cause`, to align more closely with the [Error Cause](https://tc39.es/proposal-error-cause/) specification.
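
A sketch of adapting error handling; checking the error name is just one way to identify the error, and the send call is a placeholder:

```js
try {
  await producer.send({ topic: 'topic-a', messages: [{ value: 'hello' }] })
} catch (error) {
  if (error.name === 'KafkaJSNumberOfRetriesExceeded') {
    // Before: error.originalError held the underlying error; now it is error.cause
    console.error('Retries exhausted, caused by:', error.cause)
  }
  throw error
}
```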

## Typescript: `ResourceTypes` replaced by `AclResourceTypes` and `ConfigResourceTypes`

The `ResourceTypes` enum has been split into `AclResourceTypes` and `ConfigResourceTypes`. The enum values happened to be the same for the two, even though they were actually unrelated to each other.

To migrate, simply import `ConfigResourceTypes` instead of `ResourceTypes` when operating on configs, and `AclResourceTypes` when operating on ACLs.

```ts
// Before
import { ResourceTypes } from 'kafkajs'

await admin.describeConfigs({
  includeSynonyms: false,
  resources: [
    {
      type: ResourceTypes.TOPIC,
      name: 'topic-name'
    }
  ]
})

// After
import { ConfigResourceTypes } from 'kafkajs'

await admin.describeConfigs({
  includeSynonyms: false,
  resources: [
    {
      type: ConfigResourceTypes.TOPIC,
      name: 'topic-name'
    }
  ]
})
```

## Typescript: `TopicPartitionOffsetAndMedata` removed

Use `TopicPartitionOffsetAndMetadata` instead.
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "kafkajs",
"version": "1.16.0",
"version": "2.0.0",
"description": "A modern Apache Kafka client for node.js",
"author": "Tulio Ornelas <ornelas.tulio@gmail.com>",
"main": "index.js",
6 changes: 3 additions & 3 deletions src/index.js
@@ -25,9 +25,9 @@ const DEFAULT_METADATA_MAX_AGE = 300000
const warnOfDefaultPartitioner = once(logger => {
if (process.env.KAFKAJS_NO_PARTITIONER_WARNING == null) {
logger.warn(
`KafkaJS v2.0.0 switched default partitioner. To retain the same partitioning behavior as in previous versions, create the producer with the option "createPartitioner: Partitioners.LegacyPartitioner". See ${websiteUrl(
'docs/producing',
'default-partitioners'
`KafkaJS v2.0.0 switched default partitioner. To retain the same partitioning behavior as in previous versions, create the producer with the option "createPartitioner: Partitioners.LegacyPartitioner". See the migration guide at ${websiteUrl(
'docs/migration-guide-v2.0.0',
'producer-new-default-partitioner'
)} for details. Silence this warning by setting the environment variable "KAFKAJS_NO_PARTITIONER_WARNING=1"`
)
}
4 changes: 4 additions & 0 deletions website/i18n/en.json
@@ -43,6 +43,9 @@
"title": "A Brief Intro to Kafka",
"sidebar_label": "Intro to Kafka"
},
"migration-guide-v2.0.0": {
"title": "Migrating to v2.0.0"
},
"pre-releases": {
"title": "Pre-releases"
},
@@ -331,6 +334,7 @@
"Usage": "Usage",
"Examples": "Examples",
"API Reference": "API Reference",
"Migration Guides": "Migration Guides",
"Developing KafkaJS": "Developing KafkaJS"
}
},
3 changes: 3 additions & 0 deletions website/sidebars.json
@@ -20,6 +20,9 @@
"consumer-example"
],
"API Reference": [],
"Migration Guides": [
"migration-guide-v2.0.0"
],
"Developing KafkaJS": [
"contribution-guide",
"development-environment",
