
distribute framework: add planner #46395

Merged
merged 16 commits into from
Sep 4, 2023
add framework planner
GMHDBJD committed Aug 24, 2023
commit 69d8ed6e320329a872f9f6ca677dbdaba763e9a9
9 changes: 8 additions & 1 deletion disttask/framework/planner/BUILD.bazel
@@ -2,7 +2,14 @@ load("@io_bazel_rules_go//go:def.bzl", "go_library")

 go_library(
     name = "planner",
-    srcs = ["plan.go"],
+    srcs = [
+        "plan.go",
+        "planner.go",
+    ],
     importpath = "github.com/pingcap/tidb/disttask/framework/planner",
     visibility = ["//visibility:public"],
+    deps = [
+        "//disttask/framework/storage",
+        "//sessionctx",
+    ],
 )
9 changes: 9 additions & 0 deletions disttask/framework/planner/plan.go
@@ -16,11 +16,20 @@ package planner

 import (
 	"context"
+
+	"github.com/pingcap/tidb/sessionctx"
 )
 
 // PlanCtx is the context for planning.
 type PlanCtx struct {
 	Ctx context.Context
+
+	// integrate with current distribute framework
+	SessionCtx sessionctx.Context
+	TaskKey    string
+	TaskType   string
+	ThreadCnt  int
+
 	// PreviousSubtaskMetas is a list of subtask metas from previous step.
 	// We can remove this field if we find a better way to pass the result between steps.
 	PreviousSubtaskMetas [][]byte
Contributor:
hold task_table as member?

Contributor:

> hold task_table as member?

The planner in the tidb runtime also fetches metadata from tikv, so I think it's ok.

Contributor Author @GMHDBJD (Aug 29, 2023):

Using PreviousSubtaskMetas expresses that it is the result of the previous step. If we used task_table, our planner would be coupled with the system table again, although adding a new interface would be more convenient. 🤔

34 changes: 34 additions & 0 deletions disttask/framework/planner/planner.go
@@ -0,0 +1,34 @@
// Copyright 2023 PingCAP, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package planner

import "github.com/pingcap/tidb/disttask/framework/storage"

// Planner is responsible for converting a logical plan into a global task.
type Planner struct{}

// Run runs the distribute plan.
func (p *Planner) Run(planCtx PlanCtx, plan LogicalPlan) (int64, error) {
	globalTaskManager, err := storage.GetTaskManager()
	if err != nil {
		return 0, err
	}

	taskMeta, err := plan.ToTaskMeta()
	if err != nil {
		return 0, err
	}

	return globalTaskManager.AddGlobalTaskWithSession(planCtx.SessionCtx, planCtx.TaskKey, planCtx.TaskType, planCtx.ThreadCnt, taskMeta)
}
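Planner.Run wires the plan's ToTaskMeta output into the global task manager. The sketch below mirrors that flow against a mock; mockTaskManager, importPlan, and run are all illustrative stand-ins, not the framework's real storage.TaskManager API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mockTaskManager stands in for the framework's task storage; the
// method name and signature are simplified for illustration.
type mockTaskManager struct {
	nextID int64
}

// AddGlobalTask records a task and hands back a new task ID.
func (m *mockTaskManager) AddGlobalTask(key, tp string, concurrency int, meta []byte) (int64, error) {
	m.nextID++
	fmt.Printf("task %d: key=%s type=%s threads=%d meta=%s\n", m.nextID, key, tp, concurrency, meta)
	return m.nextID, nil
}

// LogicalPlan mirrors the single method Planner.Run relies on here.
type LogicalPlan interface {
	ToTaskMeta() ([]byte, error)
}

// importPlan is a hypothetical LogicalPlan implementation.
type importPlan struct {
	JobID int64 `json:"job_id"`
}

func (p *importPlan) ToTaskMeta() ([]byte, error) { return json.Marshal(p) }

// run reproduces the shape of Planner.Run: serialize the plan, then
// submit it to the task manager.
func run(mgr *mockTaskManager, key, tp string, threads int, plan LogicalPlan) (int64, error) {
	meta, err := plan.ToTaskMeta()
	if err != nil {
		return 0, err
	}
	return mgr.AddGlobalTask(key, tp, threads, meta)
}

func main() {
	mgr := &mockTaskManager{}
	id, err := run(mgr, "import-1", "ImportInto", 4, &importPlan{JobID: 7})
	if err != nil {
		panic(err)
	}
	fmt.Println("task id:", id)
}
```

The design point is that Run only depends on the LogicalPlan interface, so any task type that can serialize itself to meta bytes can be submitted through the same path.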
7 changes: 5 additions & 2 deletions disttask/importinto/job.go
@@ -175,14 +175,17 @@ func (ti *DistImporter) SubmitTask(ctx context.Context) (int64, *proto.Task, err
 		if err2 != nil {
 			return err2
 		}
-		task := TaskMeta{
+
+		// TODO: use planner.Run to run the logical plan.
+		// For now, creating the import job and submitting the distributed task must be in the same transaction.
+		logicalPlan := &LogicalPlan{
 			JobID:             jobID,
 			Plan:              *plan,
 			Stmt:              ti.stmt,
 			EligibleInstances: instances,
 			ChunkMap:          ti.chunkMap,
 		}
-		taskMeta, err2 := json.Marshal(task)
+		taskMeta, err2 := logicalPlan.ToTaskMeta()
 		if err2 != nil {
 			return err2
 		}