
[ci] Flaky tutorial run: Could not find any valid schedule for task #13284

Closed
driazati opened this issue Nov 3, 2022 · 0 comments
Labels: type:ci (Relates to TVM CI infrastructure), type: doc

Comments

driazati (Member) commented Nov 3, 2022

Seen here and elsewhere: https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/main/4628/pipeline

[2022-11-02T15:27:34.310Z]   File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
[2022-11-02T15:27:34.310Z]   File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
[2022-11-02T15:27:34.310Z]     raise InstantiationError("Skipped because of invalid gpu kernel")
[2022-11-02T15:27:34.310Z] tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel
[2022-11-02T15:27:34.310Z] 
[2022-11-02T15:27:34.310Z] Traceback (most recent call last):
[2022-11-02T15:27:34.310Z]   24: TVMFuncCall
[2022-11-02T15:27:34.310Z]         at ../src/runtime/c_runtime_api.cc:477
[2022-11-02T15:27:34.310Z]   23: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1217
[2022-11-02T15:27:34.310Z]   22: Call
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1213
[2022-11-02T15:27:34.310Z]   21: operator()
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1731
[2022-11-02T15:27:34.310Z]   20: unpack_call<tvm::IRModule, 5, tvm::<lambda(tvm::te::Schedule, const tvm::runtime::Array<tvm::runtime::ObjectRef>&, const tvm::runtime::String&, const tvm::runtime::Map<tvm::te::Tensor, tvm::tir::Buffer>&, bool)> >
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1671
[2022-11-02T15:27:34.310Z]   19: run<>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1631
[2022-11-02T15:27:34.310Z]   18: run<tvm::runtime::TVMMovableArgValueWithContext_>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1631
[2022-11-02T15:27:34.310Z]   17: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1631
[2022-11-02T15:27:34.310Z]   16: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1631
[2022-11-02T15:27:34.310Z]   15: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1631
[2022-11-02T15:27:34.310Z]   14: run<tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_, tvm::runtime::TVMMovableArgValueWithContext_>
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1646
[2022-11-02T15:27:34.310Z]   13: operator()
[2022-11-02T15:27:34.310Z]         at ../src/driver/driver_api.cc:391
[2022-11-02T15:27:34.310Z]   12: tvm::LowerSchedule(tvm::te::Schedule, tvm::runtime::Array<tvm::runtime::ObjectRef, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<tvm::te::Tensor, tvm::tir::Buffer, std::hash<tvm::te::Tensor>, std::equal_to<tvm::te::Tensor>, std::allocator<std::pair<tvm::te::Tensor const, tvm::tir::Buffer> > > const&, tvm::GlobalVarSupply, bool)
[2022-11-02T15:27:34.310Z]         at ../src/driver/driver_api.cc:377
[2022-11-02T15:27:34.310Z]   11: tvm::LowerWithPassList(tvm::IRModule, tvm::runtime::Array<tvm::transform::Pass, void>)
[2022-11-02T15:27:34.310Z]         at ../src/driver/driver_api.cc:272
[2022-11-02T15:27:34.310Z]   10: tvm::transform::Pass::operator()(tvm::IRModule) const
[2022-11-02T15:27:34.310Z]         at ../src/ir/transform.cc:258
[2022-11-02T15:27:34.310Z]   9: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
[2022-11-02T15:27:34.310Z]         at ../src/ir/transform.cc:274
[2022-11-02T15:27:34.310Z]   8: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
[2022-11-02T15:27:34.310Z]         at ../src/ir/transform.cc:453
[2022-11-02T15:27:34.310Z]   7: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
[2022-11-02T15:27:34.310Z]         at ../src/ir/transform.cc:274
[2022-11-02T15:27:34.310Z]   6: tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
[2022-11-02T15:27:34.310Z]         at ../src/tir/ir/transform.cc:100
[2022-11-02T15:27:34.310Z]   5: tvm::runtime::TypedPackedFunc<tvm::tir::PrimFunc (tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext)>::operator()(tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext) const
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1750
[2022-11-02T15:27:34.310Z]   4: tvm::tir::PrimFunc tvm::runtime::detail::typed_packed_call_dispatcher<tvm::tir::PrimFunc>::run<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::runtime::PackedFunc const&, tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&)
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1694
[2022-11-02T15:27:34.310Z]   3: tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::tir::PrimFunc, tvm::IRModule, tvm::transform::PassContext>(tvm::tir::PrimFunc&&, tvm::IRModule&&, tvm::transform::PassContext&&) const
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1618
[2022-11-02T15:27:34.310Z]   2: tvm::runtime::PackedFuncObj::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1217
[2022-11-02T15:27:34.310Z]   1: Call
[2022-11-02T15:27:34.310Z]         at ../include/tvm/runtime/packed_func.h:1213
[2022-11-02T15:27:34.310Z]   0: operator()
[2022-11-02T15:27:34.310Z]         at ../src/runtime/c_runtime_api.cc:534
[2022-11-02T15:27:34.310Z]   File "tvm/_ffi/_cython/./packed_func.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
[2022-11-02T15:27:34.310Z]   File "/workspace/python/tvm/autotvm/measure/measure_methods.py", line 871, in verify_pass
[2022-11-02T15:27:34.310Z]     raise InstantiationError("Skipped because of invalid gpu kernel")
[2022-11-02T15:27:34.310Z] tvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel	[('tile_f', [-1, 16, 32, 1]), ('tile_y', [-1, 1, 1, 1]), ('tile_x', [-1, 7, 1, 1]), ('tile_rc', [-1, 2, 64]), ('tile_ry', [-1, 1, 1]), ('tile_rx', [-1, 1, 3]), ('auto_unroll_max_step', 0), ('unroll_explicit', 0)],None,1324444
[2022-11-02T15:27:34.310Z] WARNING:root:Could not find any valid schedule for task Task(func_name=tutorial/conv2d_no_batching, args=(1, 7, 7, 512, 512, 3, 3, (1, 1), (1, 1)), kwargs={}, workload=('tutorial/conv2d_no_batching', 1, 7, 7, 512, 512, 3, 3, (1, 1), (1, 1))). A file containing the errors has been written to /tmp/tvm_tuning_errors_0x9nxpws.log.
[2022-11-02T15:27:34.310Z] DEBUG:autotvm:Finish loading 20 records
[2022-11-02T15:27:34.310Z] WARNING:autotvm:Cannot find config for target=cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32, workload=('tutorial/conv2d_no_batching', 1, 7, 7, 512, 512, 3, 3, (1, 1), (1, 1)). A fallback configuration is used, which may bring great performance regression.
[2022-11-02T15:27:34.310Z] DEBUG:autotvm:Finish loading 20 records
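The failure is intermittent: the tuner occasionally samples only invalid GPU kernel configurations for this task, so every measured candidate raises `InstantiationError` and no valid schedule is found. One generic mitigation in a tutorial/CI script is to retry the tuning step a few times before giving up. A minimal sketch in plain Python — `retry_flaky` and `flaky_tuning_step` are hypothetical names for illustration, not TVM APIs:

```python
def retry_flaky(fn, attempts=3, exceptions=(RuntimeError,)):
    """Call fn(), retrying up to `attempts` times on the given exceptions.

    Returns fn()'s result on the first success; re-raises the last
    exception if every attempt fails.
    """
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except exceptions as err:
            last_err = err
    raise last_err

# Toy stand-in for the tuning step: fails the first two calls, then succeeds,
# mimicking a run where the tuner initially finds no valid schedule.
calls = {"n": 0}

def flaky_tuning_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("Could not find any valid schedule for task")
    return "tuned"
```

In the actual tutorial the equivalent fix would be to raise the number of tuning trials or re-run the tuning loop when the post-tuning check finds no valid record, rather than failing the whole CI job on one unlucky sample.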

cc @Mousius @areusch @gigiblender @leandron

@driazati added the needs-triage (PRs or issues that need to be investigated by maintainers to find the right assignees to address it) and type:ci (Relates to TVM CI infrastructure) labels on Nov 3, 2022
@areusch added and removed the needs-triage label on Nov 22, 2022
@driazati added the type: doc label and removed needs-triage on Nov 28, 2022
@tqchen tqchen closed this as completed Sep 20, 2024
Projects: none yet
Development: no branches or pull requests
3 participants