Taurus Queue offers a comprehensive queue ecosystem, simplifying the creation, execution, management, and monitoring of scalable and highly available queues. Leveraging the robust foundation of the Bull Project, Taurus Queue eliminates the intricacies of queue coding, allowing you to concentrate solely on your specific actions and rules, thus optimizing your time. It features a dedicated interface for efficient queue management and monitoring.
Monitor Screen (monitoring only unhealthy queues, with auto-refresh)
- Start in this repository to create and run your first queue.
- Publish jobs to your queues using our publishers, which are compatible with multiple programming languages.
- Utilize Taurus Manager for:
- Pausing/unpausing, adding/removing jobs.
- Deleting, retrying, debugging, viewing error logs and much more.
- Managing user permissions.
- Overseeing your queues.
- Implement Taurus Monitoring for real-time graphical insights into your entire ecosystem, integrating with Grafana and Prometheus:
- Queue Length
- Job Duration
- Queue States
- Failures by Queue
- Total Jobs Completed (All-Time/Periodic)
- Sum of Completed Jobs (All-Time/Periodic)
Release 1.0.0 requires Node.js 20.x.
When running with Docker, your queue starts automatically and you just need to include your business rules. :)
docker-compose up
You can use our `default-business.js`, located in the `business` folder, and include your rules / actions inside the try block:
...
try {
  // your actions and rules here
} catch (error) {
  ...
}
Or you can create your own business (maybe even multiple ones in the same project, and even push from one queue to another). To do that:
1 - Copy our `default-business.js`, located in the `business` folder, giving it your own name; in this sample we'll use `myown-business.js`;
2 - Change the name of the class at the top and the bottom of the file, like this:
...
const BaseBusiness = require('./base-business');
/**
* Example business job processor
*/
class MyOwnBusiness extends BaseBusiness { // changed from DefaultBusiness
...
module.exports = MyOwnBusiness; // changed from DefaultBusiness
3 - Include your rules / actions inside the try block:
...
try {
  // your actions and rules here
} catch (error) {
  ...
}
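Putting steps 2 and 3 together, a minimal sketch of what `myown-business.js` could end up looking like is shown below. The handler method name and its parameter are illustrative placeholders, not necessarily the project's real API; keep the structure you copied from `default-business.js` and only change the class name and the body of the try block.
const BaseBusiness = require('./base-business');

/**
 * Example business job processor (sketch; keep the structure copied from default-business.js).
 */
class MyOwnBusiness extends BaseBusiness {
  async process(job) { // hypothetical handler name; use the one already defined in default-business.js
    try {
      const payload = job.data; // your actions and rules here
      // ... do your work with payload, or even push a follow-up job to another queue ...
    } catch (error) {
      throw error; // rethrow (or handle) the error so the queue can register the failure
    }
  }
}

module.exports = MyOwnBusiness;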
4 - Last but not least, declare your business in the `constructor.js` file inside the `config` folder:
const DefaultBusiness = require('../business/default-business');
const MyOwnBusiness = require('../business/myown-business'); // Added
module.exports = {
  'default': DefaultBusiness,
  'myown': MyOwnBusiness, // Added
};
Requires Redis.
You can start it with Docker using the Compose tool.
If you want to run the `default` queue, just run:
docker-compose up
You can now run your queue worker (if you want to run your new business, just edit ./ops/docker/dev/run.sh, replacing the word "default" with your queue name, in this case "myown"), like this:
#!/bin/sh
npm i
npm run dev myown 1
You can enter the container and run a queue worker yourself, passing your queue name and the debug mode (1 or 0):
docker exec -it taurus-queue bash
Running a queue worker in debug mode (with the Node flag --watch to reload the service when code changes):
npm run dev default 1
Running a queue worker in debug mode (without --watch):
npm start default 1
With debug mode on, your queue will generate output from the `log.debug` command, making development and debugging easier. The `log.show` command always produces output, so use it carefully.
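For example, inside your business try block (how the `log` object is obtained follows whatever `default-business.js` already does, and the string arguments here are just an illustration):
log.debug('processing the current job'); // printed only when the worker runs with debug mode 1
log.show('worker is up'); // always printed, regardless of debug mode, so use it sparingly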
You can also run a cluster with multiple queue workers; in this example we run 5 workers.
- If you choose to do that, watch your resources like memory and CPU; it can be very, very heavy depending on the number of workers and the operations of your business.
Running 5 queue workers in debug mode (with the Node flag --watch to reload the service when code changes):
npm run dev-cluster default 5 1
Running 5 queue workers:
npm run start-cluster default 5
Now that you have your workers running, it's time to push items to your queue.
You can do that by running the `producer.js` or the `multi-producer.js` files in the `sample` folder.
As the name suggests, `producer.js` sends one job to the queue and `multi-producer.js` sends multiple.
In `producer.js` you can pass the queue name as a parameter; if you don't provide one, the `default` queue will be used.
You can also pass a JSON string with your data; if you don't provide one, default test data will be used.
If you want to run `producer.js` in Docker:
docker exec taurus-queue node sample/producer.js default '{"data":"mydata"}'
If you are inside the container or want to run it locally:
node sample/producer.js default '{"data":"mydata"}'
In `multi-producer.js` you pass the queue name and the number of jobs as parameters.
If you don't provide a name, the `default` queue will be used; if you don't provide a number of jobs, 2 will be used.
You can also pass a JSON string with your data; if you don't provide one, default test data will be used.
If you want to put 60 jobs on the `default` queue using Docker:
docker exec taurus-queue node sample/multi-producer.js default 60 '{"data":"mydata"}'
If you are inside the container or want to run it locally:
node sample/multi-producer.js default 60 '{"data":"mydata"}'
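The sample producers are the simplest way to push jobs, but since Taurus Queue is built on top of Bull you can also enqueue directly from any Node.js code with the Bull client. A minimal sketch, assuming Redis is reachable at the address below and a worker is consuming the `default` queue; the payload shape is entirely up to your business rules.
const Queue = require('bull'); // the library Taurus Queue is built on

async function push() {
  const queue = new Queue('default', 'redis://127.0.0.1:6379'); // assumed Redis address
  await queue.add({ data: 'mydata' }); // payload shape is up to your business rules
  await queue.close();
}

push().catch(console.error);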
If you use parallel processing with multiple workers, finding out when all jobs have completed successfully can be a complicated task due to asynchrony.
To deal with this, Taurus has a feature that uses a Redis + Lua solution to detect that the last job in a group has been executed. To use it, before sending jobs to the queues you set a unique key holding the total number of jobs you want to execute, and decrement this key for each job that completes successfully.
The last job will know it is the last one and will let you perform finishing actions.
You need:
- Fill in the Redis connection data responsible for this control in the .env file (we recommend that it not be the same Redis instance that manages the queues):
AUX_REDIS_HOST=taurus-redis
AUX_REDIS_PORT=6379
- When inserting into each queue, you can use the CheckCompletion class to set the initial job counter. (If you are outside the Taurus ecosystem you can just create this key in Redis with the SET command in your favorite language, but the value MUST be an integer greater than 0; see the sketch after these steps.)
const CheckCompletion = require('../core/check-completion.js');
...
const numberOfJobs = 100;
const checkCompletion = new CheckCompletion();
await checkCompletion.setInitialJobCounter(uniqueKeyToRepresentTheGroup, numberOfJobs);
- When each job has finished executing, just call the decrement command and check whether it returned zero; if it did, this job is the last one of the group and you can execute any finishing command you need:
const CheckCompletion = require('../core/check-completion.js');
...
const checkCompletion = new CheckCompletion();
const tasksRemaining = await checkCompletion.decrement(sameUniqueKeyToRepresentTheGroup);
if (tasksRemaining === 0) {
  console.log('All jobs are finished');
}
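If both the producer and the workers live outside the Taurus ecosystem, the same pattern can be reproduced with any Redis client, as mentioned in the step above. Below is a minimal sketch using the ioredis client against the auxiliary Redis from the .env file; the client choice, the key format, and the helper names are assumptions, and the only hard requirements are that the initial value is an integer greater than 0 and that the decrement is atomic (Redis DECR is).
const Redis = require('ioredis'); // any Redis client works; ioredis is just an example
const { randomUUID } = require('node:crypto');

// Auxiliary Redis used only for completion control (AUX_REDIS_* from the .env file)
const aux = new Redis({
  host: process.env.AUX_REDIS_HOST || 'taurus-redis',
  port: Number(process.env.AUX_REDIS_PORT) || 6379,
});

// Before publishing: create the group key holding the total number of jobs (integer > 0)
async function startGroup(totalJobs) {
  const groupKey = `job-group:${randomUUID()}`; // key format is up to you
  await aux.set(groupKey, totalJobs);
  return groupKey; // include this key in each job's payload so workers know what to decrement
}

// In each worker, after a job finishes successfully: decrement and check for zero
async function finishJob(groupKey) {
  const remaining = await aux.decr(groupKey); // DECR is atomic
  if (remaining === 0) {
    console.log('All jobs in this group are finished');
  }
}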
Want to contribute? Great!
The project uses simple code. Make your changes and be careful with your updates! Any new code will only be accepted if it passes all validations.
To ensure that the entire project is fine:
Run all validations
$ npm run check
Not Empty Foundation - Free codes, full minds