release prep for 2.0 #250

Merged: 1 commit, Sep 8, 2020
9 changes: 5 additions & 4 deletions README.md
@@ -27,7 +27,7 @@ Splunk Connect for Kafka is a Kafka Connect Sink for Splunk with the following f
1. [Start](https://kafka.apache.org/quickstart) your Kafka Cluster and confirm it is running.
2. If this is a new install, create a test topic (e.g., `perf`). Inject events into the topic. This can be done using [Kafka data-gen-app](https://github.com/dtregonning/kafka-data-gen) or the Kafka-bundled [kafka-console-producer](https://kafka.apache.org/quickstart#quickstart_send).
3. In your Kafka Connect deployment, adjust the values of `bootstrap.servers` and `plugin.path` in the `$KAFKA_HOME/config/connect-distributed.properties` file. Point `bootstrap.servers` at your Kafka brokers and `plugin.path` at the install directory of your Kafka Connect sink and source connectors (see the properties sketch after this list). For more information on installing Kafka Connect plugins, refer to the [Confluent documentation](https://docs.confluent.io/current/connect/userguide.html#id3).
4. Place the jar file created by `mvn package` (`splunk-kafka-connect-[VERSION].jar`) in or under the location specified in `plugin.path`.
5. Run `$KAFKA_HOME/bin/connect-distributed.sh $KAFKA_HOME/config/connect-distributed.properties` to start Kafka Connect.
6. Run the following command to create connector tasks. Adjust `topics` to set the Kafka topics to ingest, `splunk.indexes` to set the destination Splunk indexes, `splunk.hec.token` to set your HTTP Event Collector (HEC) token, and `splunk.hec.uri` to the URI of your destination Splunk HEC endpoint. For more information on Splunk HEC configuration, refer to the [Splunk documentation](http://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector).

@@ -42,7 +42,7 @@ Splunk Connect for Kafka is a Kafka Connect Sink for Splunk with the following f
"splunk.hec.uri": "<SPLUNK_HEC_URI:SPLUNK_HEC_PORT>",
"splunk.hec.token": "<YOUR_TOKEN>"
}
}'
```

7. Verify that data is flowing into your Splunk platform instance by searching using the index specified in the configuration.
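
For reference, step 3 might produce a `connect-distributed.properties` fragment like the one below. This is a minimal sketch; the broker addresses and plugin directory are placeholder values, not settings taken from this repository.

```
# Kafka brokers for the Connect worker to bootstrap from (placeholder addresses)
bootstrap.servers=kafka-broker-1:9092,kafka-broker-2:9092

# Directory containing the splunk-kafka-connect jar built by `mvn package` (placeholder path)
plugin.path=/opt/kafka-connect/plugins
```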
@@ -111,7 +111,7 @@ Use the below schema to configure Splunk Connect for Kafka
"splunk.hec.socket.timeout": "<timeout in seconds>",
"splunk.hec.track.data": "<true|false, tracking data loss and latency, for debugging lagging and data loss>"
"splunk.header.support": "<true|false>",
"splunk.header.custom": "<list-of-custom-headers-to-be-used-from-kafka-headers-separated-by-comma>",
"splunk.header.custom": "<list-of-custom-headers-to-be-used-from-kafka-headers-separated-by-comma>",
"splunk.header.index": "<header-value-to-be-used-as-splunk-index>",
"splunk.header.source": "<header-value-to-be-used-as-splunk-source>",
"splunk.header.sourcetype": "<header-value-to-be-used-as-splunk-sourcetype>",
@@ -154,6 +154,7 @@ Use the below schema to configure Splunk Connect for Kafka
| `splunk.hec.max.outstanding.events` | Maximum number of unacknowledged events the connector keeps in memory. When this limit is reached, a back-pressure event is triggered to slow down collection. | `1000000` |
| `splunk.hec.max.retries` | Number of times a failed batch is retried before its events are dropped. Warning: dropping events results in data loss. The default of `-1` retries indefinitely. | `-1` |
| `splunk.hec.backoff.threshhold.seconds` | The amount of time Splunk Connect for Kafka waits before attempting to resend after errors from a HEC endpoint. | `60` |
| `splunk.hec.lb.poll.interval` | The polling interval, in seconds (increase for less frequent polling, decrease for more frequent polling); see the config sketch after this table. | `120` |
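
To show how these parameters fit together, here is a sketch of a configuration fragment combining the new `splunk.hec.lb.poll.interval` with the retry and back-off settings from the table above. The parameter names come from this README; the values are illustrative only.

```
{
  "splunk.hec.max.outstanding.events": "1000000",
  "splunk.hec.max.retries": "3",
  "splunk.hec.backoff.threshhold.seconds": "60",
  "splunk.hec.lb.poll.interval": "120"
}
```

Setting `splunk.hec.max.retries` to a finite value such as `3` bounds the retry time at the cost of possible data loss; leave it at `-1` if events must never be dropped.
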
### Acknowledgement Parameters
#### Use Ack
| Name | Description | Default Value |
@@ -193,7 +194,7 @@ See [Splunk Docs](https://docs.splunk.com/Documentation/KafkaConnect/latest/User

## Benchmark Results

See [Splunk Docs](https://docs.splunk.com/Documentation/KafkaConnect/latest/User/Planyourdeployment) for benchmarking results.

## Scale out your environment

4 changes: 2 additions & 2 deletions pom.xml
@@ -6,7 +6,7 @@

<groupId>com.github.splunk.kafka.connect</groupId>
<artifactId>splunk-kafka-connect</artifactId>
<version>v1.3.0-SNAPSHOT</version>
<version>v2.0</version>
<name>splunk-kafka-connect</name>

<properties>
@@ -308,4 +308,4 @@

</plugins>
</build>
</project>
4 changes: 2 additions & 2 deletions src/main/resources/version.properties
@@ -1,3 +1,3 @@
githash=
gitbranch=release/1.3.x
gitversion=v1.3.0-SNAPSHOT
gitbranch=release/2.0.x
gitversion=v2.0