This project describes a task scheduler system whose goal is to run each scheduled task exactly once at its specified time, even in a clustered environment.
start the system
- `docker-compose up -d` (a compose sketch follows this list)
- add a scheduled Task (check here)
- check the logs in the consumer-logs folder
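The project ships its own docker-compose.yml; the sketch below only illustrates the kind of layout the command above implies. Service names, images, ports and the consumer-logs mount are assumptions, not the project's actual file.

```yaml
# Hypothetical compose layout -- service names, images and ports are assumptions.
version: "3.8"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.4.0
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  task-db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: task
  task-scheduler-producer:
    build: ./task-scheduler-producer
    depends_on: [kafka]
    ports: ["8080:8080"]
  task-scheduler-consumer:
    build: ./task-scheduler-consumer
    depends_on: [kafka, task-db]
    volumes:
      - ./consumer-logs:/app/logs   # consumer logs end up in the consumer-logs folder
```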
other services
- the other services in the system are described below (the Task model they pass around is sketched after the list)
task scheduler producer
- accepts requests and transforms them into Tasks
kafka queue
- acts as task persistence
- lets the task scheduler consumer poll tasks from the queue
task scheduler consumer
- polls tasks from the queue and stores them in the database
- fetches tasks from the database and executes them
task database
- stores the tasks
task executor
- executes the tasks
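All of these services pass the same task shape around. Below is a minimal sketch of what that model and its repository might look like, assuming Spring Data JPA; the class, field and finder names are assumptions, not the project's actual schema.

```java
// ScheduledTask.java -- hypothetical task model shared by producer, queue, database and executor.
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import java.time.Instant;

@Entity
public class ScheduledTask {
    @Id
    private String id;             // unique task id
    private String name;           // human-readable task name
    private String payload;        // serialized parameters for the task executor
    private Instant scheduledTime; // when the task should run
    private String status;         // e.g. PENDING -> RUNNING -> DONE, used to enforce run-once

    // getters and setters omitted for brevity
}

// ScheduledTaskRepository.java -- Spring Data repository used by the consumer sketches below.
import org.springframework.data.jpa.repository.JpaRepository;
import java.time.Instant;
import java.util.List;

public interface ScheduledTaskRepository extends JpaRepository<ScheduledTask, String> {
    // derived query: tasks that are still PENDING and whose scheduled time has passed
    List<ScheduledTask> findByStatusAndScheduledTimeBefore(String status, Instant time);
}
```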
Task Scheduler Producer
- exposes an endpoint to create a scheduled task
- pushes the scheduled task to the Kafka queue (a sketch follows this list)
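A minimal sketch of what that endpoint could look like, assuming Spring Boot with Spring Kafka; the `/tasks` path, the `scheduled-tasks` topic and the raw-JSON payload are assumptions, not the project's actual API.

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical producer endpoint: accepts a request and pushes the task onto the Kafka queue.
@RestController
public class TaskScheduleController {

    private static final String TOPIC = "scheduled-tasks";

    private final KafkaTemplate<String, String> kafkaTemplate;

    public TaskScheduleController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @PostMapping("/tasks")
    public String createTask(@RequestBody String taskJson) {
        // a real service would validate and transform the request into a ScheduledTask here
        kafkaTemplate.send(TOPIC, taskJson);
        return "accepted";
    }
}
```

A `POST /tasks` request with a JSON task body would then land on the `scheduled-tasks` topic for the consumer to pick up.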
Task Scheduler Consumer
- polls scheduled tasks from the Kafka queue
- saves scheduled tasks to the task database (a sketch follows this list)
- fetches scheduled tasks from the task database and executes them
- each task runs once only
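A sketch of the queue-polling half, assuming Spring Kafka and reusing the ScheduledTask/ScheduledTaskRepository sketched above; the topic and group id are assumptions. Execution itself is covered by the master-node sections below.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Hypothetical consumer listener: poll scheduled tasks from Kafka and persist them as PENDING.
@Component
public class TaskScheduleListener {

    private final ScheduledTaskRepository repository;
    private final ObjectMapper objectMapper = new ObjectMapper();

    public TaskScheduleListener(ScheduledTaskRepository repository) {
        this.repository = repository;
    }

    @KafkaListener(topics = "scheduled-tasks", groupId = "task-scheduler-consumer")
    public void onMessage(String taskJson) throws Exception {
        ScheduledTask task = objectMapper.readValue(taskJson, ScheduledTask.class);
        task.setStatus("PENDING");   // stored as PENDING so the runner picks it up exactly once
        repository.save(task);
    }
}
```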
Scheduled tasks should run only on the master node of the cluster, so the task consumer also has the following requirements (a node-record and heartbeat sketch follows this list).
- a master flag for each node
- only one alive master node in the cluster
- only the master node can assign or execute scheduled tasks
- each node sends a heartbeat at a fixed rate
- an assumed alive time determines whether a node is alive
- the master node is resolved at a fixed rate, to ensure a master node is available
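One way to back these requirements is a node table plus a scheduled heartbeat, sketched below assuming Spring's `@Scheduled`; the class names, the 5-second rate and the `scheduler.node-id` property are assumptions.

```java
// SchedulerNode.java -- hypothetical node record with the master flag and heartbeat timestamp.
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import java.time.Instant;

@Entity
public class SchedulerNode {
    @Id
    private String nodeId;          // unique id of this consumer instance
    private boolean master;         // master flag for each node
    private Instant lastHeartbeat;  // compared against the assumed alive time

    // getters and setters omitted for brevity
}

// SchedulerNodeRepository.java
import org.springframework.data.jpa.repository.JpaRepository;

public interface SchedulerNodeRepository extends JpaRepository<SchedulerNode, String> {}

// HeartbeatSender.java -- each node refreshes its own heartbeat at a fixed rate.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import java.time.Instant;

@Component
public class HeartbeatSender {

    private final SchedulerNodeRepository nodes;
    private final String nodeId;

    public HeartbeatSender(SchedulerNodeRepository nodes,
                           @Value("${scheduler.node-id}") String nodeId) {
        this.nodes = nodes;
        this.nodeId = nodeId;
    }

    @Scheduled(fixedRate = 5000)   // assumed heartbeat rate: every 5 seconds
    public void sendHeartbeat() {
        nodes.findById(nodeId).ifPresent(node -> {
            node.setLastHeartbeat(Instant.now());
            nodes.save(node);
        });
    }
}
```

The `@Scheduled` methods in these sketches only fire if scheduling is enabled, e.g. with `@EnableScheduling` on a configuration class.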
application startup
- init the application
- check for an alive master
- set this node as master if no master is available
- save the application node (a startup sketch follows this list)
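A sketch of that startup flow, reusing the SchedulerNode classes above. A real implementation would need a transaction or unique constraint so two nodes starting at the same time cannot both claim mastership.

```java
import jakarta.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import java.time.Duration;
import java.time.Instant;

// Hypothetical startup flow: check for an alive master, claim mastership only if none, save this node.
@Component
public class NodeStartup {

    private static final Duration ALIVE_TIME = Duration.ofSeconds(15);  // assumed alive window

    private final SchedulerNodeRepository nodes;
    private final String nodeId;

    public NodeStartup(SchedulerNodeRepository nodes,
                       @Value("${scheduler.node-id}") String nodeId) {
        this.nodes = nodes;
        this.nodeId = nodeId;
    }

    @PostConstruct
    public void init() {
        Instant cutoff = Instant.now().minus(ALIVE_TIME);
        boolean masterAlive = nodes.findAll().stream()
                .anyMatch(n -> n.isMaster()
                        && n.getLastHeartbeat() != null
                        && n.getLastHeartbeat().isAfter(cutoff));

        SchedulerNode self = new SchedulerNode();
        self.setNodeId(nodeId);
        self.setLastHeartbeat(Instant.now());
        self.setMaster(!masterAlive);   // become master only when no alive master exists
        nodes.save(self);               // save the application node
    }
}
```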
resolve master
assume node1 is the master
- node1 and node2 send heartbeats
- node1 and node2 execute resolve master
assume node1 is now down
- node2 sends a heartbeat
- node2 executes resolve master, but no master is alive
- a new master is found and saved (a resolve-master sketch follows this list)
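A sketch of the resolve-master job covering both walkthroughs above (node1 alive: nothing to do; node1 down: a surviving node is promoted). A real implementation would additionally lock or version the node rows so only one node performs the promotion.

```java
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;
import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Hypothetical resolve-master job: runs on every node at a fixed rate; if no master has a
// fresh heartbeat, stale masters are demoted and one alive node is promoted.
@Component
public class MasterResolver {

    private static final Duration ALIVE_TIME = Duration.ofSeconds(15);   // assumed alive window

    private final SchedulerNodeRepository nodes;

    public MasterResolver(SchedulerNodeRepository nodes) {
        this.nodes = nodes;
    }

    @Scheduled(fixedRate = 10000)   // assumed resolve rate: every 10 seconds
    @Transactional
    public void resolveMaster() {
        Instant cutoff = Instant.now().minus(ALIVE_TIME);
        List<SchedulerNode> all = nodes.findAll();

        boolean masterAlive = all.stream().anyMatch(n -> n.isMaster() && isAlive(n, cutoff));
        if (masterAlive) {
            return;   // node1 is still master and alive: nothing to do
        }

        // No alive master (node1 went down): demote stale masters, promote one alive node.
        all.stream().filter(SchedulerNode::isMaster).forEach(n -> {
            n.setMaster(false);
            nodes.save(n);
        });
        all.stream().filter(n -> isAlive(n, cutoff)).findFirst().ifPresent(n -> {
            n.setMaster(true);
            nodes.save(n);
        });
    }

    private boolean isAlive(SchedulerNode n, Instant cutoff) {
        return n.getLastHeartbeat() != null && n.getLastHeartbeat().isAfter(cutoff);
    }
}
```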
execute scheduled task
assume node1 is the master
- node1 and node2 start executing scheduled tasks
- node1 is the master, so it fetches the scheduled task from the database
- node1 updates the task status and assigns the scheduled task to the task executor
- node2 is not the master, so it returns (an execution sketch follows this list)
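A sketch of that execution loop, reusing the repositories sketched above; `ScheduledTaskExecutor` stands in for the project's task executor and is an assumed interface with a single `execute(ScheduledTask)` method.

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import java.time.Instant;
import java.util.List;

// Hypothetical execution loop: every node wakes up at a fixed rate, but only the master
// fetches due tasks, marks them RUNNING and assigns them to the task executor.
@Component
public class ScheduledTaskRunner {

    private final SchedulerNodeRepository nodes;
    private final ScheduledTaskRepository tasks;
    private final ScheduledTaskExecutor executor;   // assumed interface: void execute(ScheduledTask task)
    private final String nodeId;

    public ScheduledTaskRunner(SchedulerNodeRepository nodes,
                               ScheduledTaskRepository tasks,
                               ScheduledTaskExecutor executor,
                               @Value("${scheduler.node-id}") String nodeId) {
        this.nodes = nodes;
        this.tasks = tasks;
        this.executor = executor;
        this.nodeId = nodeId;
    }

    @Scheduled(fixedRate = 5000)   // assumed polling rate
    public void run() {
        boolean isMaster = nodes.findById(nodeId).map(SchedulerNode::isMaster).orElse(false);
        if (!isMaster) {
            return;   // node2 in the walkthrough: not master, so just return
        }
        // node1 in the walkthrough: fetch due PENDING tasks from the database
        List<ScheduledTask> due = tasks.findByStatusAndScheduledTimeBefore("PENDING", Instant.now());
        for (ScheduledTask task : due) {
            task.setStatus("RUNNING");   // update the status first so the task is not picked up again
            tasks.save(task);
            executor.execute(task);      // assign the scheduled task to the task executor
        }
    }
}
```

Because the status flips to RUNNING before the hand-off, an assigned task is never fetched a second time, which is what the "runs once only" guarantee relies on.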



