## 🐞Describe the bug

Saving a converted model produces `AttributeError: 'NoneType' object has no attribute 'rmtree'`, raised from `MLModel.__del__` and reported as "Exception ignored".
## Stack Trace

```
Exception ignored in: <function MLModel.__del__ at 0x1355b8790>
Traceback (most recent call last):
  File "/Users/USER/miniforge3/envs/coremltools/lib/python3.9/site-packages/coremltools/models/model.py", line 366, in __del__
AttributeError: 'NoneType' object has no attribute 'rmtree'
```
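This error message is characteristic of a destructor running during interpreter shutdown: by the time `__del__` fires for an object that is still referenced at exit, Python may already have cleared module globals (here, presumably the `shutil` name used in `model.py`) to `None`. A minimal sketch of the pattern, independent of coremltools (the class name is hypothetical):

```python
import os
import shutil
import tempfile


class TempDirOwner:
    """Hypothetical stand-in for how MLModel keeps a temporary directory."""

    def __init__(self):
        self._temp_dir = tempfile.mkdtemp()

    def __del__(self):
        # If __del__ runs during interpreter shutdown, the module-level name
        # `shutil` may already have been set to None, which raises:
        #   AttributeError: 'NoneType' object has no attribute 'rmtree'
        shutil.rmtree(self._temp_dir, ignore_errors=True)


# Deleted explicitly (before shutdown), the cleanup works fine; the error
# only appears when the object survives until interpreter exit.
owner = TempDirOwner()
path = owner._temp_dir
del owner
print(os.path.isdir(path))  # → False
```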
## To Reproduce

```python
import urllib.request

import coremltools as ct
import tensorflow as tf

print("downloading mobileNet model")
# Download MobileNetV2 (using tf.keras)
keras_model = tf.keras.applications.MobileNetV2(
    weights="imagenet",
    input_shape=(224, 224, 3),
    classes=1000,
)

print("downloading labels")
label_url = 'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'
class_labels = urllib.request.urlopen(label_url).read().splitlines()
class_labels = class_labels[1:]  # remove the first class, which is background
assert len(class_labels) == 1000

# make sure entries of class_labels are strings
for i, label in enumerate(class_labels):
    if isinstance(label, bytes):
        class_labels[i] = label.decode("utf8")

# Define the input type as image, and set pre-processing parameters to
# normalize the image to the interval [-1, 1], as expected by MobileNet
image_input = ct.ImageType(shape=(1, 224, 224, 3),
                           bias=[-1, -1, -1], scale=1 / 127)

# set class labels
classifier_config = ct.ClassifierConfig(class_labels)

print("converting to coreml")
# Convert the model using the Unified Conversion API to an ML Program
model = ct.convert(
    keras_model,
    convert_to="mlprogram",
    inputs=[image_input],
    classifier_config=classifier_config,
)

# Set feature descriptions (these show up as comments in Xcode)
model.input_description["input_1"] = "Input image to be classified"
model.output_description["classLabel"] = "Most likely image category"

# Set model author name
model.author = '"Original Paper: Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen'

# Set the license of the model
model.license = "Please see https://github.com/tensorflow/tensorflow for license information, and https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet for the original source of the model."

# Set a short description for the Xcode UI
model.short_description = "Detects the dominant objects present in an image from a set of 1001 categories such as trees, animals, food, vehicles, person etc. The top-1 accuracy from the original publication is 74.7%."

# Set a version for the model
model.version = "2.0"

print("saving model")
# Save model as a Core ML model package
model.save("MobileNetV2.mlpackage")

# Load the saved model
loaded_model = ct.models.MLModel("MobileNetV2.mlpackage")
```
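A possible workaround (an assumption, not verified against coremltools): release the model references explicitly before the script ends, so that `__del__` runs while the `shutil` module global is still populated rather than during interpreter teardown. The reference-lifetime pattern can be checked with a hypothetical stand-in object:

```python
import gc
import weakref


class FakeModel:
    """Hypothetical stand-in for ct.models.MLModel; only the reference
    lifetime matters for this sketch, not the real API."""
    pass


model = FakeModel()
finalized = []
weakref.finalize(model, finalized.append, True)

# Dropping the last reference before interpreter shutdown lets the
# finalizer (or __del__) run while all module globals are still intact.
del model
gc.collect()
print(finalized)  # → [True]
```

In the repro script above, this would correspond to adding `del model, loaded_model` (followed by `gc.collect()`) as the last lines before exit.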
## System environment

- coremltools version: 6.0b1 (from the `requirements.txt` below)
- OS (e.g. MacOS version or Linux type): macOS 12.4 (21F79)
- Any other relevant version information: Python 3.9, tensorflow-macos 2.6.0, tensorflow-metal 0.5.0
`requirements.txt`:

```text
absl-py @ file:///home/conda/feedstock_root/build_artifacts/absl-py_1606234718434/work
aiohttp @ file:///Users/runner/miniforge3/conda-bld/aiohttp_1649013207063/work
aiosignal @ file:///home/conda/feedstock_root/build_artifacts/aiosignal_1636093929600/work
astunparse @ file:///home/conda/feedstock_root/build_artifacts/astunparse_1610696312422/work
async-timeout @ file:///home/conda/feedstock_root/build_artifacts/async-timeout_1640026696943/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1640799537051/work
blinker==1.4
brotlipy @ file:///Users/runner/miniforge3/conda-bld/brotlipy_1648854242877/work
cached-property @ file:///home/conda/feedstock_root/build_artifacts/cached_property_1615209429212/work
cachetools @ file:///home/conda/feedstock_root/build_artifacts/cachetools_1633010882559/work
certifi==2022.5.18.1
cffi @ file:///Users/runner/miniforge3/conda-bld/cffi_1636046173594/work
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1644853463426/work
clang==5.0
click @ file:///Users/runner/miniforge3/conda-bld/click_1651215424788/work
coremltools==6.0b1
cryptography @ file:///Users/runner/miniforge3/conda-bld/cryptography_1652967108255/work
flatbuffers==1.12
frozenlist @ file:///Users/runner/miniforge3/conda-bld/frozenlist_1648771983520/work
gast @ file:///home/conda/feedstock_root/build_artifacts/gast_1596839682936/work
google-auth @ file:///home/conda/feedstock_root/build_artifacts/google-auth_1629296548061/work
google-auth-oauthlib @ file:///home/conda/feedstock_root/build_artifacts/google-auth-oauthlib_1630497468950/work
google-pasta==0.2.0
grpcio @ file:///Users/runner/miniforge3/conda-bld/grpcio_1653138938320/work
h5py @ file:///Users/runner/miniforge3/conda-bld/h5py_1609497507927/work
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1642433548627/work
importlib-metadata @ file:///Users/runner/miniforge3/conda-bld/importlib-metadata_1653252883118/work
keras @ file:///home/conda/feedstock_root/build_artifacts/keras_1637159014053/work/keras-2.6.0-py2.py3-none-any.whl
Keras-Preprocessing @ file:///home/conda/feedstock_root/build_artifacts/keras-preprocessing_1610713559828/work
Markdown @ file:///home/conda/feedstock_root/build_artifacts/markdown_1651821407140/work
mpmath==1.2.1
multidict @ file:///Users/runner/miniforge3/conda-bld/multidict_1648882473246/work
numpy @ file:///Users/runner/miniforge3/conda-bld/numpy_1649281657907/work
oauthlib @ file:///home/conda/feedstock_root/build_artifacts/oauthlib_1643507977997/work
opt-einsum @ file:///home/conda/feedstock_root/build_artifacts/opt_einsum_1617859230218/work
packaging==21.3
protobuf==3.20.1
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work
PyJWT @ file:///home/conda/feedstock_root/build_artifacts/pyjwt_1652398519695/work
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1643496850550/work
pyparsing==3.0.9
PySocks @ file:///Users/runner/miniforge3/conda-bld/pysocks_1648857374584/work
pyu2f @ file:///home/conda/feedstock_root/build_artifacts/pyu2f_1604248910016/work
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1641580202195/work
requests-oauthlib @ file:///home/conda/feedstock_root/build_artifacts/requests-oauthlib_1643557462909/work
rsa @ file:///home/conda/feedstock_root/build_artifacts/rsa_1637781155505/work
scipy @ file:///Users/runner/miniforge3/conda-bld/scipy_1653074075583/work
six @ file:///home/conda/feedstock_root/build_artifacts/six_1590081179328/work
sympy==1.10.1
tensorboard @ file:///home/conda/feedstock_root/build_artifacts/tensorboard_1629677129676/work/tensorboard-2.6.0-py3-none-any.whl
tensorboard-data-server @ file:///Users/runner/miniforge3/conda-bld/tensorboard-data-server_1649932898779/work/tensorboard_data_server-0.6.0-py3-none-macosx_11_0_arm64.whl
tensorboard-plugin-wit @ file:///home/conda/feedstock_root/build_artifacts/tensorboard-plugin-wit_1641458951060/work/tensorboard_plugin_wit-1.8.1-py3-none-any.whl
tensorflow-estimator==2.9.0
tensorflow-macos==2.6.0
tensorflow-metal==0.5.0
termcolor==1.1.0
tqdm==4.64.0
typing-extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1602702424206/work
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1647489083693/work
Werkzeug @ file:///home/conda/feedstock_root/build_artifacts/werkzeug_1651670883478/work
wrapt @ file:///Users/runner/miniforge3/conda-bld/wrapt_1624972047019/work
yarl @ file:///Users/runner/miniforge3/conda-bld/yarl_1648966561632/work
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1649012893348/work
```