Service that allows survey creation, question answering, and statistics calculation.
Surveyor contains 4 microservices and 3 third-party services:
- Manager - is responsible for creating/editing surveys, questions, and answer configuration. Writes surveys to PostgreSQL.
- Requests service - is responsible for gathering survey answers from users. Reads survey configuration from the same PostgreSQL as Manager. Writes all answers to Kafka.
- Collector service - reads answers from Kafka and aggregates statistics into MongoDB.
- Statistics service - returns answer statistics for a question. Reads the same MongoDB as Collector.
- PostgreSQL - stores the surveys, questions, and answer configuration created with Manager.
- Kafka - streams answer data.
- MongoDB - stores online statistics. It doesn't store full statistics; instead it increments counters for every answer to a question. Dynamic documents are used.
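The counter-per-answer scheme above can be sketched as follows. This is a hypothetical illustration, not the actual Collector code; the document shape and field names (`question_id`, `answers.<answer_id>`) are assumptions.

```python
# Sketch of how Collector could increment per-answer counters in a
# dynamic MongoDB document. Field names are hypothetical.

def inc_spec(answer_id):
    """Build the MongoDB update that bumps one answer's counter."""
    return {"$inc": {"answers.%s" % answer_id: 1}}

def apply_inc(doc, spec):
    """In-memory stand-in for MongoDB's $inc with auto-created fields."""
    for field, amount in spec["$inc"].items():
        top, key = field.split(".", 1)
        doc.setdefault(top, {})
        doc[top][key] = doc[top].get(key, 0) + amount
    return doc

# With a real pymongo client the same update would be:
# collection.update_one({"question_id": qid}, inc_spec(answer_id), upsert=True)

doc = {"question_id": 42}
for answer in ["yes", "no", "yes"]:
    apply_inc(doc, inc_spec(answer))
print(doc)  # {'question_id': 42, 'answers': {'yes': 2, 'no': 1}}
```

Because each answer only touches one counter, the document stays small and the update is cheap enough to run per incoming Kafka message.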
```
Survey/Question/Answer CRUD --> Manager --> Postgres
   & step-by-step attach                      ||
        Surveys, Questions, Answers           ||
              configuration                   \/
Answered Survey ------------------------> Requests ---> Kafka ---> Collector ---> MongoDB
                   answers                      answers       aggregated            ||
                                                              statistics            || statistics
                                                                                    \/
Get statistics request (question) --------------------------------------------> Statistics
```
Possible improvements:
- index `question_id` in MongoDB
- switch to Avro or MessagePack in Kafka
- make Requests read from a PostgreSQL slave
- add caching of survey configuration from Postgres for Requests
- make Statistics read from a MongoDB slave
- move Kafka and Requests to autoscale groups
- put answers in different topics (by survey's country code) and start more Collectors
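The per-country topic idea could look like the sketch below. The topic naming convention and the producer call are assumptions for illustration, not the project's actual code.

```python
# Hypothetical sketch: route answers to a Kafka topic per survey country
# code, so additional Collector instances can be attached to busy countries.

def answers_topic(country_code, base="answers"):
    """Derive a topic name like 'answers.de' from a survey's country code."""
    code = country_code.strip().lower()
    if not code.isalpha() or len(code) != 2:
        raise ValueError("expected a two-letter country code")
    return "%s.%s" % (base, code)

# A producer would then publish with something like:
# producer.send(answers_topic(survey.country), answer_payload)
print(answers_topic("DE"))  # answers.de
```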
Implementation notes:
- Old questions are not deleted. In the current version, statistics for old (deleted) questions are also fetched from the statistics backend. It is the frontend's job to filter out deleted questions, which can be done with information fetched from the Manager backend. In the future we could send schema-change notifications to Collector.
- Statistics in the database are not full (only counters). This was done to allow real-time (live on the frontend) statistics display, which would be impossible if we saved all statistics to a SQL database and queried it with joins. If we need full statistics in the future, we can easily add a Kafka-to-Postgres stream.
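Because only counters are stored, the Statistics service can derive answer shares on the fly without any joins. A minimal sketch, assuming the hypothetical counters document shape used above:

```python
# Hypothetical sketch: turn a counters document (as aggregated by
# Collector) into per-answer percentages for one question.

def answer_stats(doc):
    """Return {answer: percent} computed from a counters document."""
    counters = doc.get("answers", {})
    total = sum(counters.values())
    if total == 0:
        return {}
    return {a: round(100.0 * n / total, 1) for a, n in counters.items()}

doc = {"question_id": 42, "answers": {"yes": 3, "no": 1}}
print(answer_stats(doc))  # {'yes': 75.0, 'no': 25.0}
```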
make run
Will clone and build Docker images of the services (if not already built) and start all services via docker-compose.
Important: avoid port conflicts with your already running services.
See docker-compose.yml for details.
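The overall service wiring might look roughly like the fragment below. This is only a sketch of the topology described above; the repository's actual `docker-compose.yml` is the source of truth, and the image names and build paths here are assumptions.

```yaml
# Hypothetical sketch, not the project's real docker-compose.yml.
version: "3"
services:
  postgres:
    image: postgres
  mongodb:
    image: mongo
  kafka:
    image: apache/kafka        # assumption: any Kafka image works here
  manager:
    build: ./manager           # service build paths are assumptions
    depends_on: [postgres]
  requests:
    build: ./requests
    depends_on: [postgres, kafka]
  collector:
    build: ./collector
    depends_on: [kafka, mongodb]
  statistics:
    build: ./statistics
    depends_on: [mongodb]
```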
Requirements:
make stop will stop all the services
Run integration tests for the surveyor microservices:
- create a survey in manager
- check the created survey in postgres
- get the survey from manager
- compare the surveys
- send answers to requests
- check kafka for the answers
- check answer statistics in mongodb (sometimes the test is too fast and fails here; just re-run)
- check answer statistics via statistics

make build && make run && make test
Important: Services should be accessible for tests.
Configuration:
Environment configuration is available in inventory/
Add your test in two steps:
- add your script in script/tests/
- and... that's all. There is no second step.