
disttask: fix removing meta when met network partition for so long then recover from it (#48005) #48024

Merged

Conversation

ti-chi-bot
Member

This is an automated cherry-pick of #48005

What problem does this PR solve?

Issue Number: close #47954

Problem Summary:
During a prolonged network partition, the dist task framework may remove the meta entry for the TiDB node.
When the TiDB node recovers from the network partition, its meta is lost.
The dispatcher may then find no available nodes to dispatch subtasks to.

What is changed and how it works?

Add recoverMetaLoop, which initializes and recovers dist_framework_meta for the TiDB node running the scheduler manager.
This is necessary because, when a TiDB node experiences a prolonged network partition, the dispatcher deletes its dist_framework_meta row. Once the node recovers from the partition, the metadata must be re-inserted.

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No need to test
    • I checked and no code files have been changed.

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

Please refer to Release Notes Language Style Guide to write a quality release note.

Enhance the distributed task framework to recover node metadata after a prolonged network partition.

@ti-chi-bot ti-chi-bot added release-note Denotes a PR that will be considered when it comes time to generate release notes. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. type/cherry-pick-for-release-7.5 This PR is cherry-picked to release-7.5 from a source PR. labels Oct 27, 2023
@ti-chi-bot ti-chi-bot added the cherry-pick-approved Cherry pick PR approved by release team. label Oct 27, 2023
@ti-chi-bot ti-chi-bot bot added the needs-1-more-lgtm Indicates a PR needs 1 more LGTM. label Oct 27, 2023
@ti-chi-bot

ti-chi-bot bot commented Oct 27, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: tangenta, ywqzzy

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot added approved lgtm and removed needs-1-more-lgtm Indicates a PR needs 1 more LGTM. labels Oct 27, 2023
@ti-chi-bot

ti-chi-bot bot commented Oct 27, 2023

[LGTM Timeline notifier]

Timeline:

  • 2023-10-27 03:13:47 UTC: ☑️ agreed by ywqzzy.
  • 2023-10-27 03:14:15 UTC: ☑️ agreed by tangenta.

@hawkingrei
Member

/retest

2 similar comments
@hawkingrei
Member

/retest

@ywqzzy
Contributor

ywqzzy commented Oct 27, 2023

/retest

@ti-chi-bot ti-chi-bot bot merged commit b1966e7 into pingcap:release-7.5 Oct 27, 2023
7 checks passed