Conversation

@artonge (Contributor) commented Mar 20, 2025

  1. feat: Implements getSeenUsers to iterate over users
  2. chore: Refactor callForSeenUsers to use getSeenUsers
  3. feat: Limit ExpireTrash job to 30 minutes
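
For illustration, a minimal sketch of the generator shape behind the first commit (self-contained stand-in types; the real method lives on Nextcloud's user manager and works with `OCP\IUser`, so the names and fields here are assumptions, not the actual diff):

```php
<?php
declare(strict_types=1);

// Illustrative stand-in for a user record; the real code deals in OCP\IUser.
final class User {
    public function __construct(
        public readonly string $uid,
        public readonly int $lastLogin, // unix timestamp, 0 = never logged in
    ) {
    }
}

final class UserManager {
    /** @param list<User> $users */
    public function __construct(private array $users) {
    }

    /**
     * Lazily yield every "seen" user (logged in at least once), starting at
     * $offset. Because this is a generator, the call site decides when to
     * stop iterating; no limit parameter is needed on this side.
     *
     * @return \Generator<int, User>
     */
    public function getSeenUsers(int $offset = 0): \Generator {
        $position = 0;
        foreach ($this->users as $user) {
            if ($user->lastLogin <= 0) {
                continue; // never logged in, not "seen"
            }
            if ($position++ < $offset) {
                continue; // already handled by an earlier run
            }
            yield $user;
        }
    }
}
```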

@artonge artonge added the enhancement, 3. to review and php labels Mar 20, 2025
@artonge artonge added this to the Nextcloud 32 milestone Mar 20, 2025
@artonge artonge self-assigned this Mar 20, 2025
@artonge artonge requested a review from a team as a code owner March 20, 2025 15:07
@artonge artonge requested a review from susnux March 24, 2025 14:22
@artonge artonge force-pushed the artonge/feat/allow_partial_seen_users branch from ea30adf to 310945d on March 24, 2025 14:42
@come-nc (Contributor) left a comment

Just wondering if we should use `DateTime` instead of `int` 🤷

@icewind1991 (Member) commented

Instead of adding a limit to `callForSeenUsers`, it might be better to have a method that returns all (seen) users as an iterator. Then the call site can do its own limiting.

@artonge artonge force-pushed the artonge/feat/allow_partial_seen_users branch from 310945d to 23a5330 on March 26, 2025 15:01
@artonge (Contributor, Author) commented Mar 26, 2025

> Instead of adding a limit to `callForSeenUsers`, it might be better to have a method that returns all (seen) users as an iterator. Then the call site can do its own limiting.

Good idea and easy to implement :). Just pushed that version in the latest commit. Will remove the other change after review.

Note that this probably needs to be backported.
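
With the iterator in place, the 30-minute cap from the third commit reduces to a plain loop at the call site. A hedged sketch of that pattern, building on the `UserManager` sketch above (`loadSavedOffset`, `saveOffset`, and `expireTrashFor` are hypothetical helpers, not the actual `ExpireTrash` code):

```php
// Illustrative job class, not the actual ExpireTrash implementation.
final class ExpireTrashJob {
    private const MAX_RUNTIME = 30 * 60; // 30 minutes, in seconds

    public function __construct(private UserManager $userManager) {
    }

    public function run(): void {
        $startTime = time();
        $offset = $this->loadSavedOffset();

        foreach ($this->userManager->getSeenUsers($offset) as $user) {
            $this->expireTrashFor($user); // hypothetical per-user expiry
            $offset++;
            if (time() - $startTime > self::MAX_RUNTIME) {
                $this->saveOffset($offset); // resume from here on the next run
                return;
            }
        }
        $this->saveOffset(0); // full pass completed, start over next time
    }

    // Hypothetical persistence; the real job would use app config or the DB.
    private function loadSavedOffset(): int { return 0; }
    private function saveOffset(int $offset): void { }
    private function expireTrashFor(User $user): void { }
}
```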

@artonge artonge force-pushed the artonge/feat/allow_partial_seen_users branch from 23a5330 to 7c5449b on March 26, 2025 15:10
@icewind1991 (Member) commented

Looks good. You can also re-implement `callForSeenUsers` on top of `getAllSeenUsers` for some de-duplication.
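
Such a re-implementation could look roughly like the following (a sketch on top of the `getSeenUsers` generator above; the convention that returning `false` from the callback stops the iteration is assumed here, mirroring how Nextcloud's `callFor…Users` helpers usually behave):

```php
// Added to the UserManager sketch above:
/**
 * Re-implemented on top of getSeenUsers() so the iteration logic lives in
 * one place. If the callback returns false, the iteration stops.
 */
public function callForSeenUsers(\Closure $callback): void {
    foreach ($this->getSeenUsers() as $user) {
        if ($callback($user) === false) {
            break;
        }
    }
}
```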

@artonge artonge force-pushed the artonge/feat/allow_partial_seen_users branch from 7c5449b to f5217f4 on March 26, 2025 20:10
@artonge artonge changed the title from "feat: Allow partial callForSeenUsers" to "feat: Limit ExpireTrash job to 30 minutes" Mar 26, 2025
@artonge artonge force-pushed the artonge/feat/allow_partial_seen_users branch 3 times, most recently from f7ec0e9 to 700f297 on March 26, 2025 20:33
@skjnldsv skjnldsv force-pushed the artonge/feat/allow_partial_seen_users branch from 700f297 to b7cd6fb on March 28, 2025 08:38
@skjnldsv skjnldsv added the 4. to release label and removed the 3. to review label Mar 28, 2025
@AndyScherzinger AndyScherzinger force-pushed the artonge/feat/allow_partial_seen_users branch from b7cd6fb to e28515f on March 29, 2025 10:50
artonge added a commit that referenced this pull request May 21, 2025
…lel run

- Follow-up of #51600

The original PR introduced the possibility to continue an `ExpireTrash` job by saving the offset, so the job does not have to start over the whole user list when it crashes or is killed.

But on big instances, one process is not enough to go through all the users in a timely manner. Supporting parallel runs allows covering more ground faster.

This PR introduces that possibility. We now store the offset right away so another parallel job can pick up the task from that point. We arbitrarily cut the user list into chunks of 10 to avoid drastically overflowing the 30-minute time limit.

Signed-off-by: Louis Chemineau <louis@chmn.me>
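
A hedged sketch of the claim-then-process pattern that commit message describes, reusing the hypothetical helpers from the sketches above (in real code the offset update would need to be atomic, e.g. a compare-and-set in the database, so two workers cannot claim the same chunk):

```php
// A reworked run() for the job sketch above (illustrative only):
public function run(): void {
    $chunkSize = 10;       // arbitrary, per the commit message
    $maxRuntime = 30 * 60; // soft 30-minute budget
    $startTime = time();

    while (time() - $startTime <= $maxRuntime) {
        // Advance the shared offset *before* processing, so a parallel job
        // immediately sees the claim and starts with the following chunk.
        $offset = $this->loadSavedOffset();
        $this->saveOffset($offset + $chunkSize); // claim before processing

        $processed = 0;
        foreach ($this->userManager->getSeenUsers($offset) as $user) {
            $this->expireTrashFor($user);
            if (++$processed >= $chunkSize) {
                break; // chunk done, loop around to claim the next one
            }
        }

        if ($processed < $chunkSize) {
            $this->saveOffset(0); // reached the end of the user list, reset
            return;
        }
    }
}
```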
@nextcloud-bot nextcloud-bot mentioned this pull request Aug 19, 2025
backportbot bot pushed a commit that referenced this pull request Sep 9, 2025
@skjnldsv skjnldsv modified the milestones: Nextcloud 32, Nextcloud 33 Sep 28, 2025