## Kafka single node setup

On a single machine, a **3 broker** Kafka cluster is the practical minimum for hassle-free operation. Also, the **replication factor is set to 2**.

Say X, Y and Z are our Kafka brokers. With a replication factor of 2, every partition lives on two of the three brokers: data whose leader is X gets a second copy on either Y or Z, and likewise for data led by Y and Z.

### Prerequisites
- Have Java >= 1.8 installed.
- Get the **binary** distribution of Kafka from [here](https://kafka.apache.org/downloads).

### Setup

Extract the contents of the Kafka archive to a convenient place and `cd` into it. Use a terminal multiplexer to run the components that make up the Kafka ecosystem.
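
For example, assuming the downloaded archive is named `kafka_<scala>-<version>.tgz` (the actual name depends on the release you grabbed), the setup step looks roughly like this; the `tmux` session is optional and just one way to keep ZooKeeper and the three brokers running side by side:

```
# extract the binary distribution and move into it
$ tar -xzf kafka_<scala>-<version>.tgz
$ cd kafka_<scala>-<version>

# optional: a tmux session to run each component in its own window
$ tmux new -s kafka
```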

#### Zookeeper
- Edit the config file `config/zookeeper.properties` and change the `dataDir` entry to a location that does not get wiped after a reboot.

  Ex: `dataDir=/home/neoito/tmp/zookeeper`
- Start the zookeeper instance with

  `$ bin/zookeeper-server-start.sh config/zookeeper.properties`
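
Once ZooKeeper is up, a quick sanity check is to query it with the `zookeeper-shell.sh` script that ships in the same `bin` directory (a minimal sketch, assuming ZooKeeper is listening on the default port 2181):

```
# list the root znodes to confirm ZooKeeper is reachable
$ bin/zookeeper-shell.sh localhost:2181 ls /
```

Listing the root path should show at least the `zookeeper` znode; once the brokers below are running, `ls /brokers/ids` should list their ids as well.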

#### Kafka brokers
- In the `config` folder there is a `server.properties` file; this is the Kafka broker's config file. We need 3 broker instances.
- Make a copy: `$ cp config/server.properties config/server.b1.properties`
- In the copy, make the following changes (keep the `#` comments on their own lines; a trailing comment becomes part of the value in a properties file):

```
# unique id for our broker instance
broker.id=1
# port where the broker listens
port=9092
# allow deleting Kafka topics stored in the broker
delete.topic.enable=true
# point log storage to a place that is not volatile
log.dirs=/home/neoito/kafka-logs/01
# prevents "leader not found" errors when connecting from a remote machine
advertised.host.name=10.0.0.81
```

- Make 2 more copies of this file and change the fields `broker.id`, `port` and `log.dirs` in each file (see the sketch after this list).
- Run the individual brokers like

```
$ bin/kafka-server-start.sh config/server.b1.properties
$ bin/kafka-server-start.sh config/server.b2.properties
$ bin/kafka-server-start.sh config/server.b3.properties
```
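
As referenced above, the two extra config files can be produced by copying `server.b1.properties` and changing the three fields. A minimal sketch using GNU `sed` (editing by hand works just as well): the ports 9093/9094 match the broker list used in the producer command below, while the `/02` and `/03` log directories simply continue the `/01` pattern.

```
# second broker
$ cp config/server.b1.properties config/server.b2.properties
$ sed -i 's/^broker.id=1/broker.id=2/; s/^port=9092/port=9093/; s#^log.dirs=.*#log.dirs=/home/neoito/kafka-logs/02#' config/server.b2.properties

# third broker
$ cp config/server.b1.properties config/server.b3.properties
$ sed -i 's/^broker.id=1/broker.id=3/; s/^port=9092/port=9094/; s#^log.dirs=.*#log.dirs=/home/neoito/kafka-logs/03#' config/server.b3.properties
```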

**Tip:** Executing `$ jps` in the shell lists all running JVM instances. To kill one of the processes, `kill -9 <pid>` will do the trick.

##### Testing out the install
- Create a topic with

  `$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 3 --topic <topicname>`

- Push data onto it

  `$ bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9093,localhost:9094 --sync --topic <topicname>`

- Fetch data from it

  `$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic <topicname> --from-beginning`
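
After producing and consuming a few messages, you can also ask Kafka how the topic was laid out across the three brokers, which makes the replication-factor-2 behaviour described at the top visible (a quick check, using the same ZooKeeper address):

```
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic <topicname>
```

Each of the 3 partitions should list two broker ids under `Replicas` (the leader plus one follower).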

# Program Setup

To start the application in development mode,

```
npm run dev
```

In production mode use,

```
npm start
```

### API Example

```
{
  "kafka_topic": "click-to-call",
  "message": {
    "id": 1,
    "url": "heyhey.com"
  }
}
```
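
For instance, the payload above could be sent to the running service with a plain `curl` call (a hedged sketch: the endpoint path `/publish` and port `3000` are assumptions for illustration, not taken from this README):

```
# POST the example payload (endpoint path and port are assumed for illustration)
$ curl -X POST http://localhost:3000/publish \
    -H "Content-Type: application/json" \
    -d '{"kafka_topic": "click-to-call", "message": {"id": 1, "url": "heyhey.com"}}'
```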

# kafka-example