
Commit daa35db

Nevroz Arslan committed
[doc] change wording and indentation
1 parent 94efcef commit daa35db

File tree

1 file changed: +20 -20 lines changed


README.md

Lines changed: 20 additions & 20 deletions
@@ -96,84 +96,84 @@ The project requires a running Cassandra database instance and a Redis database.
Call this under the project folder to set them up on your local environment.

```sh
docker-compose up -d
```
Cassandra needs a bit of time to establish its configuration.
The following command will help you find out whether it is ready.

```sh
docker exec -it cassandra-service cqlsh -e "describe keyspaces"
```

If there is no error, our database is ready.
The following line will create a keyspace and table on the database.

```sh
sudo sh ./scripts/provision.sh
```
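
For orientation, a provisioning step like this typically boils down to a couple of CQL statements. The keyspace and table layout below is only an assumed sketch for illustration; the actual definitions live in `./scripts/provision.sh`:

```sh
# Hypothetical example only -- the real keyspace/table names and columns come from scripts/provision.sh
docker exec -it cassandra-service cqlsh -e "
  CREATE KEYSPACE IF NOT EXISTS dataflow
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
  CREATE TABLE IF NOT EXISTS dataflow.products (
    id      text PRIMARY KEY,
    payload text
  );"
```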

Now the setup is ready for our projects. Let's run the `job` command first. This
command pulls all the product files from AWS, processes them via a pipeline, and
stores them in a database. The first run might take a bit longer since we don't
benefit from the cache on the first run.

```sh
go run cmd/job/main.go -config dataflow.conf
```

If we try the following command instead, we'll see a longer execution time.
```sh
go run cmd/job/main.go -config dataflow.conf -concurrency 1
```
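
As a rough mental model (not the project's actual code), a `-concurrency` flag like this usually bounds how many product files are processed in parallel, for example with a semaphore around the per-file work; with `-concurrency 1` the pipeline degenerates to sequential processing:

```go
// Illustrative sketch only: bound parallel file processing with a semaphore.
package main

import (
	"flag"
	"fmt"
	"sync"
	"time"
)

func main() {
	concurrency := flag.Int("concurrency", 4, "files processed in parallel")
	flag.Parse()

	files := []string{"products-1.json", "products-2.json", "products-3.json", "products-4.json"}

	start := time.Now()
	sem := make(chan struct{}, *concurrency) // at most *concurrency slots
	var wg sync.WaitGroup
	for _, f := range files {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			sem <- struct{}{}                  // acquire a slot
			defer func() { <-sem }()           // release it when done
			time.Sleep(200 * time.Millisecond) // stand-in for download + transform + store
			fmt.Println("processed", name)
		}(f)
	}
	wg.Wait()
	fmt.Println("Pipeline took", time.Since(start))
}
```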

Start the `microservice` HTTP daemon.
```sh
go run cmd/microservice/main.go -config dataflow.conf
```

Test it:
```sh
curl localhost:8080/product/42
```
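
Under the hood, an endpoint like this maps the trailing path segment to a product ID and serves the stored record. The handler below is only a hypothetical sketch of that shape, not the service's real code:

```go
// Hypothetical sketch of a /product/{id} endpoint; the real handler lives in cmd/microservice.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
)

type product struct {
	ID string `json:"id"`
}

func main() {
	http.HandleFunc("/product/", func(w http.ResponseWriter, r *http.Request) {
		id := strings.TrimPrefix(r.URL.Path, "/product/")
		// The real service would look the product up in Redis/Cassandra here.
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(product{ID: id})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```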

### Using `redis` cache
To see the effect of caching via Redis, we can run a short demonstration.
First, in the project directory, call the following
command to remove the database instances.
```sh
docker-compose down --remove-orphans
```
Then re-create fresh instances of Cassandra and Redis.
```sh
docker-compose up -d
```
Wait about 30 seconds for Cassandra to come up.
Use this as a readiness probe:
```sh
docker exec -it cassandra-service cqlsh -e "describe keyspaces"
```

After this, we have a pipeline setup with an empty cache layer.
This means the data processing should take a bit longer. Let's check.
The following command runs the job and reports the elapsed execution time.

```sh
go run cmd/job/main.go -config dataflow.conf
```
I got the following result on my computer.

```sh
Pipeline tooks 29.788467364s
```

After the second execution of the same command, I got the following output on my local machine.
```sh
Pipeline tooks 15.399257129s
```
This result shows the benefit of caching.
We save roughly half of the pipeline's execution time.
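
The speed-up comes from the cache layer answering repeat lookups before Cassandra is ever queried. Below is a minimal illustrative sketch of that cache-aside pattern, not the project's actual code; the client setup, key naming, and the `loadFromCassandra` helper are assumptions:

```go
// Illustrative cache-aside lookup: try Redis first, fall back to Cassandra,
// then populate the cache so the next run is faster.
package cache

import (
	"context"
	"errors"
	"time"

	"github.com/redis/go-redis/v9"
)

// loadFromCassandra is a hypothetical stand-in for the real database read.
func loadFromCassandra(ctx context.Context, id string) (string, error) {
	// ... query Cassandra for the product payload ...
	return `{"id":"` + id + `"}`, nil
}

// GetProduct returns the payload for id, preferring the Redis cache.
func GetProduct(ctx context.Context, rdb *redis.Client, id string) (string, error) {
	key := "product:" + id

	// 1. Cache hit: no Cassandra round trip needed.
	val, err := rdb.Get(ctx, key).Result()
	if err == nil {
		return val, nil
	}
	if !errors.Is(err, redis.Nil) {
		return "", err // a real Redis error, not just a miss
	}

	// 2. Cache miss: fall back to the primary store.
	val, err = loadFromCassandra(ctx, id)
	if err != nil {
		return "", err
	}

	// 3. Best-effort cache fill for subsequent lookups.
	_ = rdb.Set(ctx, key, val, 10*time.Minute).Err()
	return val, nil
}
```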

