TPCH q20 fails on master branch #5338
@birdstorm Could you give me the source table schema and some data to reproduce this issue?
@zz-jason just using the original TPCH data is enough.
OK, I'll try.
the plan is:
+-----------------+-----------------+-------------------------------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+
| id | parents | children | task | operator info | count |
+-----------------+-----------------+-------------------------------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+
| TableScan_32 | Selection_33 | | cop | table:nation, range:(-inf,+inf), keep order:false | 0 |
| Selection_33 | | TableScan_32 | cop | eq(tpch.nation.n_name, ALGERIA) | 0 |
| TableReader_34 | HashLeftJoin_30 | | root | data:Selection_33 | 0 |
| TableScan_35 | | | cop | table:supplier, range:(-inf,+inf), keep order:false | 0 |
| TableReader_36 | HashLeftJoin_30 | | root | data:TableScan_35 | 0 |
| HashLeftJoin_30 | HashLeftJoin_25 | TableReader_34,TableReader_36 | root | inner join, small:TableReader_36, equal:[eq(tpch.nation.n_nationkey, tpch.supplier.s_nationkey)] | 0 |
| IndexScan_49 | | | cop | table:partsupp, index:PS_PARTKEY, PS_SUPPKEY, range:[<nil>,+inf], out of order:false | 0 |
| TableScan_50 | | | cop | table:partsupp, keep order:false | 0 |
| IndexLookUp_51 | MergeJoin_42 | | root | index:IndexScan_49, table:TableScan_50 | 0 |
| TableScan_52 | Selection_53 | | cop | table:part, range:(-inf,+inf), keep order:true | 0 |
| Selection_53 | | TableScan_52 | cop | like(tpch.part.p_name, green%, 92) | 0 |
| TableReader_54 | MergeJoin_42 | | root | data:Selection_53 | 0 |
| MergeJoin_42 | HashLeftJoin_41 | IndexLookUp_51,TableReader_54 | root | semi join, equal:[eq(tpch.partsupp.ps_partkey, tpch.part.p_partkey)], left key:tpch.partsupp.ps_partkey, right key:tpch.part.p_partkey | 0 |
| TableScan_60 | Selection_61 | | cop | table:lineitem, range:(-inf,+inf), keep order:false | 0 |
| Selection_61 | | TableScan_60 | cop | ge(tpch.lineitem.l_shipdate, 1993-01-01 00:00:00.000000), lt(tpch.lineitem.l_shipdate, 1994-01-01) | 0 |
| TableReader_62 | HashLeftJoin_41 | | root | data:Selection_61 | 0 |
| HashLeftJoin_41 | HashAgg_39 | MergeJoin_42,TableReader_62 | root | left outer join, small:TableReader_62, equal:[eq(tpch.partsupp.ps_partkey, tpch.lineitem.l_partkey) eq(tpch.partsupp.ps_suppkey, tpch.lineitem.l_suppkey)] | 0 |
| HashAgg_39 | Selection_38 | HashLeftJoin_41 | root | group by:tpch.partsupp.ps_partkey, tpch.partsupp.ps_suppkey, funcs:firstrow(tpch.partsupp.ps_partkey), firstrow(tpch.partsupp.ps_suppkey), firstrow(tpch.partsupp.ps_availqty), sum(tpch.lineitem.l_quantity) | 1 |
| Selection_38 | Projection_37 | HashAgg_39 | root | gt(cast(tpch.partsupp.ps_availqty), mul(0.5, 13_col_0)) | 0.8 |
| Projection_37 | HashLeftJoin_25 | Selection_38 | root | tpch.partsupp.ps_partkey, tpch.partsupp.ps_suppkey, tpch.partsupp.ps_availqty, mul(0.5, 13_col_0) | 0.8 |
| HashLeftJoin_25 | Projection_24 | HashLeftJoin_30,Projection_37 | root | semi join, small:Projection_37, equal:[eq(tpch.supplier.s_suppkey, tpch.partsupp.ps_suppkey)] | 0 |
| Projection_24 | Sort_23 | HashLeftJoin_25 | root | tpch.supplier.s_name, tpch.supplier.s_address | 0 |
| Sort_23 | | Projection_24 | root | tpch.supplier.s_name:asc | 0 |
+-----------------+-----------------+-------------------------------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+
the panic happened during the execution of
from the log I printed:

10382 2017/12/08 19:10:08.637 join_result_generators.go:317: [error] *executor.leftOuterJoinResultGenerator.emitMatchedInners: len(outer)=4, len(inner)=4, len(buffer)=8
10383
10384 2017/12/08 19:10:08.637 join_result_generators.go:325: [error] *executor.leftOuterJoinResultGenerator.emitUnMatchedOuter: len(outer)=4, len(outputer.defaultInner)=1
10385
10386 2017/12/08 19:10:08.637 aggregation.go:259: [error] *aggregation.aggFunction(0xc420a16300).updateSum: a=&expression.Column{FromID:11, ColName:model.CIStr{O:"L_QUANTITY", L:"l_quantity"}, DBName:model.CIStr{O:"tpch", L:"tpch"}, OrigTblName:model.CIStr{O:"lineitem", L:"lineitem"}, TblName:model.CIStr{O:"lineitem", L:"lineitem"}, RetType:(*types.FieldType)(0xc4202a2f50), ID:5, Position:4, IsAggOrSubq:false, Index:5, hashcode:[]uint8{0x80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xb, 0x80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x4}}, row.Len()=5
this issue is caused by
Maybe there still exist some scenarios that break the constraint; I'll fix this issue first anyway.
Maybe a better way is to calculate
#5278 |
Currently, unless we push the aggregation across the join, its default values will always be all
@winoros aggregation push down is disabled currently.

// NewSessionVars creates a session vars object.
func NewSessionVars() *SessionVars {
	return &SessionVars{
		Users:                      make(map[string]string),
		Systems:                    make(map[string]string),
		PreparedStmts:              make(map[uint32]interface{}),
		PreparedStmtNameToID:       make(map[string]uint32),
		PreparedParams:             make([]interface{}, 10),
		TxnCtx:                     &TransactionContext{},
		RetryInfo:                  &RetryInfo{},
		StrictSQLMode:              true,
		Status:                     mysql.ServerStatusAutocommit,
		StmtCtx:                    new(stmtctx.StatementContext),
		AllowAggPushDown:           false,
		BuildStatsConcurrencyVar:   DefBuildStatsConcurrency,
		IndexJoinBatchSize:         DefIndexJoinBatchSize,
		IndexLookupSize:            DefIndexLookupSize,
		IndexLookupConcurrency:     DefIndexLookupConcurrency,
		IndexSerialScanConcurrency: DefIndexSerialScanConcurrency,
		DistSQLScanConcurrency:     DefDistSQLScanConcurrency,
		MaxRowCountForINLJ:         DefMaxRowCountForINLJ,
		MaxChunkSize:               DefMaxChunkSize,
	}
}
Yes, but we can in fact enable it from the client. The default value cannot be decided unless we decide it when pushing the aggregation down.
Please answer these questions before submitting your issue. Thanks!
If possible, provide a recipe for reproducing the error.
run TPCH q20:
Following is TiDB's log:
What version of TiDB are you using (tidb-server -V)?