
coremltools==4.0 breaks "test_coreml_codegen.py::test_annotate" #6668


Description

@leandron

With the recent release of coremltools 4.0 a couple of days ago (https://pypi.org/project/coremltools/#history), it looks like test_coreml_codegen.py::test_annotate is now broken.

I'm raising this now because the failure will certainly show up once we re-generate the Docker images.
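
To reproduce locally, I run just the failing test with something along these lines (a sketch only, assuming it is launched from the root of a TVM checkout with coremltools 4.0 and pytest installed):

# Repro sketch: run only the failing test from the TVM checkout root.
import sys
import pytest

sys.exit(pytest.main(["-v", "tests/python/contrib/test_coreml_codegen.py::test_annotate"]))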

The error message I see is this:

=================================== FAILURES ===================================
________________________________ test_annotate _________________________________

    def test_annotate():
        mod = _create_graph()
        mod = transform.AnnotateTarget("coremlcompiler")(mod)
        mod = transform.PartitionGraph()(mod)
    
        expected = _create_graph_annotated()
>       assert tvm.ir.structural_equal(mod, expected, map_free_vars=True)
E       assert False
E        +  where False = <function structural_equal at 0x7f457623c510>(#[version = "0.0.5"]\ndef @main(%x: Tensor[(10, 10), float32], %y: Tensor[(10, 10), float32]) -> Tensor[(10, 10), float... -> Tensor[(10, 10), float32] {\n  add(%coremlcompiler_2_i0, %coremlcompiler_2_i0) /* ty=Tensor[(10, 10), float32] */\n}\n, #[version = "0.0.5"]\ndef @coremlcompiler_0(%coremlcompiler_0_i0: Tensor[(10, 10), float32], Primitive=1, Inline=1, Com...32], %y: Tensor[(10, 10), float32]) {\n  %0 = @coremlcompiler_0(%y);\n  %1 = @coremlcompiler_2(%x);\n  subtract(%0, %1)\n}\n, map_free_vars=True)
E        +    where <function structural_equal at 0x7f457623c510> = <module 'tvm.ir' from '/workspace/python/tvm/ir/__init__.py'>.structural_equal
E        +      where <module 'tvm.ir' from '/workspace/python/tvm/ir/__init__.py'> = tvm.ir

tests/python/contrib/test_coreml_codegen.py:95: AssertionError
________________________________ test_bias_add _________________________________

    def test_bias_add():
        for dtype in ["float16", "float32"]:
            xshape = (10, 2, 3, 4)
            bshape = (2,)
            rtol = 1e-2 if dtype == "float16" else 1e-5
            x = relay.var("x", shape=xshape, dtype=dtype)
            bias = relay.var("bias", dtype=dtype)
            z = relay.nn.bias_add(x, bias)
            func = relay.Function([x, bias], z)
    
            x_data = np.random.uniform(size=xshape).astype(dtype)
            y_data = np.random.uniform(size=bshape).astype(dtype)
    
>           verify_results(func, [x_data, y_data], "test_bias_add", rtol=rtol)
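
For context, the failing assertion comes from the AnnotateTarget/PartitionGraph flow shown at the top of the trace. Below is a minimal standalone sketch of that flow that can be used to print the partitioned module and eyeball how it differs from the expected one. The graph builder here is a hypothetical stand-in for the _create_graph() helper in the test file, not a copy of it:

import tvm
from tvm import relay
from tvm.relay import transform
# Registers the "coremlcompiler" op annotations; may be redundant depending on the TVM build.
from tvm.relay.op.contrib import coreml  # noqa: F401


def _make_example_graph():
    # Hypothetical stand-in for _create_graph() in test_coreml_codegen.py:
    # just a small graph built from CoreML-supported ops (add, multiply, subtract).
    x = relay.var("x", shape=(10, 10))
    y = relay.var("y", shape=(10, 10))
    z = relay.subtract(relay.multiply(y, y), relay.add(x, x))
    mod = tvm.IRModule()
    mod["main"] = relay.Function([x, y], z)
    return mod


mod = _make_example_graph()
mod = transform.AnnotateTarget("coremlcompiler")(mod)
mod = transform.PartitionGraph()(mod)
print(mod)  # inspect which subgraphs ended up in coremlcompiler_* functions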

This is a summary of the CoreML test results as I see them:

tests/python/contrib/test_coreml_codegen.py::test_annotate FAILED        [  6%]
tests/python/contrib/test_coreml_codegen.py::test_compile_and_run SKIPPED [  6%]
tests/python/contrib/test_coreml_codegen.py::test_add PASSED             [  7%]
tests/python/contrib/test_coreml_codegen.py::test_multiply PASSED        [  8%]
tests/python/contrib/test_coreml_codegen.py::test_clip PASSED            [  9%]
tests/python/contrib/test_coreml_codegen.py::test_batch_flatten PASSED   [ 10%]
tests/python/contrib/test_coreml_codegen.py::test_expand_dims PASSED     [ 10%]
tests/python/contrib/test_coreml_codegen.py::test_relu PASSED            [ 11%]
tests/python/contrib/test_coreml_codegen.py::test_softmax PASSED         [ 12%]
tests/python/contrib/test_coreml_codegen.py::test_conv2d PASSED          [ 13%]
tests/python/contrib/test_coreml_codegen.py::test_global_avg_pool2d PASSED [ 13%]

For now, I think we should stick to coremltools==3.4, which is known to work.
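
If re-pinning the Docker images takes a while, one possible interim guard (a sketch only, not a proposal for the actual fix) would be to skip the CoreML codegen tests at module level whenever a newer coremltools is installed:

# Sketch of a module-level guard for test_coreml_codegen.py; assumes pytest
# and coremltools are importable. Not the actual fix, just an option.
import pytest

coremltools = pytest.importorskip("coremltools")

if not coremltools.__version__.startswith("3.4"):
    pytest.skip(
        "CoreML codegen tests are only known to work with coremltools==3.4",
        allow_module_level=True,
    )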
