A pinging project.
__,---,
.---. /__|o\ ) .-"-. .----.""".
/ 6_6 `-\ / / / 4 4 \ /____/ (0 )\
\_ (__\ ,) (, \_ v _/ `--\_ /
// \\ // \\ // \\ // \\
(( )) {( )} (( )) {{ }}
=======""===""=========""===""======""===""=========""===""=======
jgs ||| ||||| ||| |||
| ||| | '|'
ASCII art by jgs.
$ pip install -e git+https://github.com/byashimov/pingeon.git#egg=pingeon
1. Producer
===========
+---------------+ +--------------+ +-----------------+
| | func() | | Log | |
| Regular job | -----> | Producer | ----> | Kafka producer |
| | | | | |
+---------------+ +--------------+ +-----------------+
^
| func() -> dict
+-------------+
| +-------+ | Create Log
| | check | | with unique Log.uid
| +-------+ |
| |
| +-------+ |
| | check | |
| +-------+ |
+-------------+
---------------------------------
2. Consumer
===========
+---------------+ +--------------+ +-----------------+
| | func() | | Log | |
| Regular job | -----> | Consumer | <---- | Kafka consumer |
| | | | | |
+---------------+ +--------------+ +-----------------+
| Log
V
+-----------------+
INSERT | |
ON CONFLICT UID | Postgres |
DO NOTHING | client |
PARTITION BY RANGE | |
+-----------------+
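The sink in the diagram above could look roughly like the following in Postgres. This is only a sketch: the table and column names are assumptions for illustration, not the project's actual schema.

```sql
-- Hypothetical log table, range-partitioned as in the diagram.
-- The partition key must be part of the primary key.
CREATE TABLE log (
    uid        uuid        NOT NULL,
    name       text        NOT NULL,
    data       jsonb       NOT NULL,
    created_at timestamptz NOT NULL,
    PRIMARY KEY (uid, created_at)
) PARTITION BY RANGE (created_at);

CREATE TABLE log_2024_01 PARTITION OF log
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- Idempotent insert: a message re-delivered by Kafka is a no-op.
INSERT INTO log (uid, name, data, created_at)
VALUES ($1, $2, $3, $4)
ON CONFLICT DO NOTHING;
```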
Project contains several components:

- A regular job `worker`. Runs a given `func` every given `interval` in seconds
- `producer`, which runs any amount of `checkers` and saves their results as a labeled `Log` with a unique uid to Kafka
- "Checkers" are just regular functions
- `consumer` is also run by `worker`; it reads the Kafka topic and saves data to a partitioned table in Postgres. Since Kafka guarantees at-least-once delivery, it doesn't fail on an already existing log uid.
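The components above can be sketched in a few lines. Everything here is a hypothetical illustration: the real names and signatures live in the pingeon package and may differ (in particular, a `rounds` limit is added so the sketch terminates, whereas the real worker presumably loops forever).

```python
import asyncio
import uuid
from dataclasses import dataclass, field


@dataclass
class Log:
    """Assumed shape of the Log contract: a labeled dict with a unique uid."""
    name: str
    data: dict
    uid: str = field(default_factory=lambda: str(uuid.uuid4()))


async def ping_example() -> dict:
    # A checker: any function that returns a dictionary.
    return {"host": "example.com", "ok": True}


async def produce(checkers, save) -> None:
    # Run every checker and save each result as a labeled Log.
    for check in checkers:
        await save(Log(name=check.__name__, data=await check()))


async def worker(func, interval: float, rounds: int = 1) -> None:
    # Run the given async func every `interval` seconds.
    for _ in range(rounds):
        await func()
        await asyncio.sleep(interval)
```

For example, `worker(lambda: produce([ping_example], save), interval=60)` would run the checker once a minute and hand each result to `save`.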
Every component is well isolated by simple interfaces:

- `worker` runs any async function
- `producer` runs any async function, which however must return a dictionary
- `checker` does whatever it does, as long as it returns a dictionary
- both `consumer` and `producer` use "repositories" to read or save data. All of them use the `Log` object as a contract.
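The repository contract can be shown with an in-memory stand-in. This is a sketch under assumptions (the `Log` shape and method names are invented for illustration): any object with an async `save` works, and duplicate uids are silently ignored, mirroring the `INSERT ... ON CONFLICT ... DO NOTHING` behaviour on the Postgres side.

```python
import asyncio
from collections import namedtuple

Log = namedtuple("Log", "uid name data")  # assumed shape, for illustration


class MemoryRepo:
    """In-memory stand-in for the Postgres repository."""

    def __init__(self):
        self.rows = {}

    async def save(self, log: Log) -> None:
        # A re-delivered message with an already known uid is a no-op.
        self.rows.setdefault(log.uid, log)


async def consume(messages, repo: MemoryRepo) -> None:
    # Drain messages (e.g. read from a Kafka topic) into the repository.
    for log in messages:
        await repo.save(log)
```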
See `tests/test_integration.py` for a usage example.