[Bug]: v5.10.0 possibly breaking change - how to migrate the rjk? #2661

Closed · 1 task done

EinfachHans opened this issue Jul 17, 2024 · 12 comments · Fixed by #2658 · May be fixed by WontonSam/Cachimanstarter.dev#320 or WontonSam/Cachimanstarter.dev#349

Labels: bug (Something isn't working)

@EinfachHans

Version

5.10.0

Platform

NodeJS

What happened?

I really need some assistance/clarification about the change in 5.10.0, because I think it is a breaking change.

I am currently using 5.9.0 and dispatching jobs with a unique key plus my pattern and timezone. I also set the name to the same value as the key. BullMQ then adds one entry to my Redis, where the name looks like bull:{{queue}}:repeat:{{key}}:{{someHash}}. It also sets an rjk consisting of the key, followed by the pattern and timezone in the old format.
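
A minimal sketch of that setup, with hypothetical queue, key, pattern, and timezone values (BullMQ's repeat options accept a custom key alongside pattern and tz):

```typescript
import { Queue } from 'bullmq';

// Hypothetical values throughout: queue name, key, pattern, and
// timezone are placeholders. The job name and the custom repeat key
// are set to the same value, as described above.
const queue = new Queue('my-queue');

await queue.add(
  'my-unique-key', // job name, same value as the key
  { some: 'payload' },
  {
    repeat: {
      pattern: '0 9 * * *', // cron pattern
      tz: 'Europe/Berlin', // timezone
      key: 'my-unique-key', // custom repeatable key (the "rjk" base)
    },
  },
);
```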

To be able to easily remove the job again, I use the following logic (see the sketch after this list):

  • Get all job ids by calling the Redis client directly with the pattern bull:{{queue}}:repeat:{{key}}:* and strip the first part so that only the ids are left
  • With each of those ids, call getJob on the queue instance
  • Then call removeRepeatableByKey with the rjk from the previously loaded job
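
A minimal sketch of that removal flow, assuming the default 'bull' key prefix (KEYS is used for brevity; SCAN would be kinder to a large Redis):

```typescript
import { Queue } from 'bullmq';

// Hypothetical helper mirroring the three steps above.
async function removeByCustomKey(queue: Queue, key: string): Promise<void> {
  const client = await queue.client; // underlying ioredis connection

  // 1. Find all Redis keys matching bull:{queue}:repeat:{key}:*
  const redisKeys = await client.keys(`bull:${queue.name}:repeat:${key}:*`);

  // ...and strip the first part so only the job ids are left.
  const jobIds = redisKeys.map((k) => k.slice(`bull:${queue.name}:`.length));

  for (const jobId of jobIds) {
    // 2. Load the job via the queue instance.
    const job = await queue.getJob(jobId);
    // 3. Remove the repeatable config by the job's rjk.
    if (job?.repeatJobKey) {
      await queue.removeRepeatableByKey(job.repeatJobKey);
    }
  }
}
```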

Now I have migrated to 5.10.0.

What I experienced is:

  1. When adding a job, the rjk is now only the key I passed in, with no pattern information appended. This is great, because it allows me to remove the job easily by that key.
  2. BullMQ currently seems to add two entries to Redis: one as before, and one where the name has no hash appended and which contains less information. Is this intended? It doesn't match my current delete implementation.

But the main reason I think this is a breaking change is that after an update I will have two kinds of jobs: previously existing jobs where the rjk has the old format, and new ones. The new ones can easily be removed with removeRepeatableByKey; the old ones cannot.

At first I thought a job might get the new rjk format after it is processed once and the successor job is created, but this does not seem to be the case.

I hope I described everything clearly; if not, please let me know. Can you help me with this and with how to handle the change in my implementation?

How to reproduce.

No response

Relevant log output

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
@roggervalf
Collaborator

Hi @EinfachHans, I thought this case was addressed when removing legacy repeatable jobs, so I opened a PR to fix it.

@EinfachHans
Author

EinfachHans commented Jul 17, 2024

@roggervalf Thanks for the answer. For my understanding: what exactly will this PR do?

Delete old-format jobs with removeRepeatableByKey as well?

So that (when called as removeRepeatableByKey('some-key')) it deletes some-key (new format) and also some-key::::1 * 1 (old format)?

@roggervalf
Collaborator

roggervalf commented Jul 18, 2024

Hi @EinfachHans,

Delete old-format jobs with removeRepeatableByKey as well?

Yes, I added a few tests to validate it.

So that (when called as removeRepeatableByKey('some-key')) it deletes some-key (new format) and also some-key::::1 * 1 (old format)?

It depends on the key that is passed. Our script first checks whether the key is in the old format, and if so it tries to delete the old-format records. If your key follows the new format, it removes the new-format records associated with that key.
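
To make the two formats concrete, a hypothetical pair of calls (the key strings are illustrative; the old-format suffix depends on the job's other repeat fields, ending in the cron pattern):

```typescript
// Old format (<= 5.9.0): extra colon-separated fields are appended to the key.
await queue.removeRepeatableByKey('some-key::::1 * 1');

// New format (>= 5.10.x): the key is exactly the custom key you passed in.
await queue.removeRepeatableByKey('some-key');
```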

github-actions bot pushed a commit that referenced this issue Jul 18, 2024
## [5.10.1](v5.10.0...v5.10.1) (2024-07-18)

### Bug Fixes

* **repeatable:** consider removing legacy repeatable job ([#2658](#2658)) fixes [#2661](#2661) ([a6764ae](a6764ae))
* **repeatable:** pass custom key as an args in addRepeatableJob to prevent CROSSSLOT issue ([#2662](#2662)) fixes [#2660](#2660) ([9d8f874](9d8f874))
@EinfachHans
Author

@roggervalf To be honest, I don't think my problem is solved by this change. OK, I can now also delete an entry via removeRepeatableByKey('remove::::* 1 * 1 *'), but that was not my problem.

My problem is that after upgrading past 5.9.0 I will have jobs in both the old format and the new one, and while deleting I can't know which format a given job uses. The custom logic I built and described above also doesn't work, because the second entry added for every repeatable job (the one without the hash) doesn't get removed.

So let's break it down:

  1. Is it correct that with the new version two entries are added to Redis for every repeatable job, one with and one without the hash?
  2. How can I migrate jobs from the old format to the new format? That is the most important part, because somehow I have to be able to do this.

@roggervalf
Collaborator

Hi @EinfachHans,

Is it correct that with the new version two entries are added to Redis for every repeatable job, one with and one without the hash?

I could reproduce this case; it is an issue that I'm fixing in #2665. The intention is to keep old repeatable jobs as they are, so that only new repeatable jobs have the new format.

How can I migrate jobs from the old format to the new format? That is the most important part, because somehow I have to be able to do this.

Remove your old repeatable jobs and re-add them; this should be done after #2665 is merged.

@EinfachHans
Author

@roggervalf Okay, I will test this behavior after #2665 is merged and released, and will let you know 😊

About the migration: this is the first time I have to do something like this with my Bull jobs. What exactly would you recommend? I currently have the following in mind (see the sketch after this list):

  1. Create an endpoint that does the following:
  • Pauses the queue
  • Iterates over all jobs via getRepeatableJobs, removing and re-adding each one
  • Resumes the queue
  2. After the new backend version is deployed: call this endpoint

Would you consider this the right way?
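
A minimal sketch of such a migration endpoint, under the assumption that each repeatable job can be re-registered from the metadata returned by getRepeatableJobs (the original job payload is not part of that metadata, so it would have to come from elsewhere):

```typescript
import { Queue } from 'bullmq';

// Hypothetical migration routine: pause, re-register every repeatable
// job so it is stored in the new key format, then resume.
async function migrateRepeatableJobs(queue: Queue): Promise<void> {
  await queue.pause();
  try {
    const repeatables = await queue.getRepeatableJobs();
    for (const r of repeatables) {
      // Remove the existing (possibly legacy-format) entry by its key...
      await queue.removeRepeatableByKey(r.key);
      // ...and re-add it so it is stored in the new format. The job
      // payload is not recoverable from the repeat metadata, so {}
      // stands in for whatever data your jobs actually need.
      await queue.add(r.name, {}, {
        repeat: {
          pattern: r.pattern ?? undefined,
          tz: r.tz ?? undefined,
        },
      });
    }
  } finally {
    await queue.resume();
  }
}
```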

@roggervalf
Collaborator

Hey @EinfachHans, that sounds good, and sorry for the issues. We were not expecting to affect old formats, only to add the new format for new records.

@EinfachHans
Author

@roggervalf Hey again. I just tested 5.10.3 and there are still two entries added for one repeatable job.

This entry is added, which has the hash in the name and all the fields as I know them from before:
[Screenshot: Redis entry whose name includes the hash, with the full set of fields]

Then there is also this entry added, with the same name but without the hash and with far fewer fields:
[Screenshot: Redis entry without the hash and with fewer fields]

Is this the intended behaviour for the new format?

@roggervalf
Collaborator

Oh OK, I see what you meant now: the first is the hash of a delayed job generated by the repeatable job, while the second image is the hash of the actual repeatable job. This is expected with the new format.

@EinfachHans
Author

@roggervalf Thank you very much for your explanation and patience. Appreciate it!! 😊

@EinfachHans
Author

@roggervalf Another problem: when I remove the job via removeRepeatableByKey, the first entry (the hash of a delayed job generated by the repeatable job) is removed, but the second entry (the hash of the actual repeatable job) is not removed from my Redis.

@roggervalf
Collaborator

Thank you @EinfachHans, I could reproduce it and fixed it in v5.10.4.
