master: pd panic during running workload with multiple resource groups #6973

Closed
mayjiang0203 opened this issue Aug 22, 2023 · 1 comment · Fixed by #6983
Assignees: CabinfeverB
Labels: affects-7.1, severity/critical, type/bug (The issue is confirmed as a bug.)

Comments

mayjiang0203 commented Aug 22, 2023

Bug Report

What did you do?

  1. Create 1000 resource groups and assign each of them to one of 1000 different users (see the sketch after this list).
  2. Run a workload with 32 of those users.
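
A minimal sketch of step 1, not the original test harness: it assumes a TiDB endpoint at 127.0.0.1:4000 with resource control enabled (`tidb_enable_resource_control`), and the group/user names, RU_PER_SEC quota, and empty password are hypothetical.

```go
// Rough reproduction sketch: create N resource groups and bind one user to
// each via TiDB's resource-control SQL. Endpoint, credentials, names, and the
// RU_PER_SEC quota are assumptions, not values from the original test.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Assumed TiDB endpoint; adjust the DSN for the actual cluster.
	db, err := sql.Open("mysql", "root@tcp(127.0.0.1:4000)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	for i := 0; i < 1000; i++ {
		rg := fmt.Sprintf("rg%d", i)
		user := fmt.Sprintf("u%d", i)
		stmts := []string{
			fmt.Sprintf("CREATE RESOURCE GROUP IF NOT EXISTS %s RU_PER_SEC = 1000", rg),
			fmt.Sprintf("CREATE USER IF NOT EXISTS '%s' IDENTIFIED BY ''", user),
			// Bind the user to its group so its queries consume that group's RUs.
			fmt.Sprintf("ALTER USER '%s' RESOURCE GROUP %s", user, rg),
		}
		for _, stmt := range stmts {
			if _, err := db.Exec(stmt); err != nil {
				log.Fatalf("%s: %v", stmt, err)
			}
		}
	}
}
```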

What did you expect to see?

No errors reported.

What did you see instead?

One PD instance panicked with the following fatal log:

{"namespace":"e2e-oltp-multiple-resource-group-tps-2071148-1-37","level":"FATAL","container":"pd","log":"[log.go:87] [panic] [recover={}] [stack=\"[github.com/tikv/pd/pkg/utils/logutil.LogPanic\\n\\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/pkg/utils/logutil/log.go:87\\nruntime.gopanic\\n\\t/usr/local/go/src/runtime/panic.go:914\\ngithub.com/prometheus/client_golang/prometheus.(*counter).Add\\n\\t/go/pkg/mod/github.com/prometheus/client_golang@v1.11.1/prometheus/counter.go:109\\ngithub.com/tikv/pd/pkg/mcs/resourcemanager/server.(*Manager).backgroundMetricsFlush\\n\\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/pkg/mcs/resourcemanager/server/manager.go:327](http://github.com/tikv/pd/pkg/utils/logutil.LogPanic//n//t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/pkg/utils/logutil/log.go:87//nruntime.gopanic//n//t/usr/local/go/src/runtime/panic.go:914//ngithub.com/prometheus/client_golang/prometheus.(*counter).Add//n//t/go/pkg/mod/github.com/prometheus/client_golang@v1.11.1/prometheus/counter.go:109//ngithub.com/tikv/pd/pkg/mcs/resourcemanager/server.(*Manager).backgroundMetricsFlush//n//t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/pkg/mcs/resourcemanager/server/manager.go:327)\"]","pod":"tc-pd-0"}

What version of PD are you using (pd-server -V)?

```
./pd-server -V
Release Version: v7.4.0-alpha
Edition: Community
Git Commit Hash: 346e7716e2598dfba2db6afa73c3731e15449f49
Git Branch: heads/refs/tags/v7.4.0-alpha
UTC Build Time: 2023-08-16 11:36:22
```
mayjiang0203 added the type/bug (The issue is confirmed as a bug.) label on Aug 22, 2023
mayjiang0203 changed the title from "master: pd panic after create multiple resource groups" to "master: pd panic during running workload with multiple resource groups" on Aug 22, 2023

mayjiang0203 commented Aug 22, 2023

/assign @CabinfeverB
/severity Critical

ti-chi-bot closed this as completed in #6983 on Sep 1, 2023
ti-chi-bot added a commit that referenced this issue on Sep 1, 2023
close #6973

Signed-off-by: Cabinfever_B <cabinfeveroier@gmail.com>

Co-authored-by: ShuNing <nolouch@gmail.com>
Co-authored-by: ti-chi-bot[bot] <108142056+ti-chi-bot[bot]@users.noreply.github.com>
ti-chi-bot pushed a commit to ti-chi-bot/pd that referenced this issue Sep 1, 2023
close tikv#6973

Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
ti-chi-bot pushed a commit that referenced this issue on Sep 5, 2023
close #6973

Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
Signed-off-by: Cabinfever_B <cabinfeveroier@gmail.com>

Co-authored-by: Yongbo Jiang <cabinfeveroier@gmail.com>
Co-authored-by: Cabinfever_B <cabinfeveroier@gmail.com>
Co-authored-by: Hu# <jinhao.hu@pingcap.com>