Fix local models being treated as S3 paths #1075


Merged
merged 2 commits into from
May 21, 2020

Conversation

RobertLucian (Member)

@RobertLucian RobertLucian commented May 21, 2020

@vishalbollu when the model's path is not an S3 path, the model cacher and validator still treat it as an S3 path instead of as a local path. This prevents the deployment of APIs with locally provided models (ONNX and TensorFlow).

How to reproduce:

```yaml
# cortex.yaml
- name: api-tf-cd7
  predictor:
    type: tensorflow
    path: predictor.py
    model: ./inferentia-models/tensorflow/resnet50
```

With the above config, the resulting error is:

```
error: /home/robert/sandbox/feature/multi-model-endpoint/cortex_tf_sample.yaml: api-tf: predictor: model: "./inferentia-models/tensorflow/resnet50" is not a valid s3 path (e.g. s3://cortex-examples/iris-classifier/tensorflow is a valid s3 path)
```
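To illustrate the intent of the fix (Cortex's operator is written in Go, and the function names `is_s3_path` / `validate_model_path` below are hypothetical, not Cortex's actual API), a model-path validator should branch on the URI scheme before applying S3-specific checks, rather than rejecting every non-`s3://` path:

```python
import os

S3_PREFIX = "s3://"

def is_s3_path(path: str) -> bool:
    """Return True if the model path points at an S3 location."""
    return path.startswith(S3_PREFIX)

def validate_model_path(path: str) -> str:
    """Accept either an S3 URI or an existing local directory.

    Hypothetical sketch: branch on the scheme first, so local paths
    like ./inferentia-models/tensorflow/resnet50 are never run through
    S3 validation.
    """
    if is_s3_path(path):
        if len(path) <= len(S3_PREFIX):
            raise ValueError(f'"{path}" is not a valid s3 path')
        return path
    # Local path: resolve it relative to the working directory and
    # verify that the model directory actually exists on disk.
    local = os.path.abspath(path)
    if not os.path.isdir(local):
        raise ValueError(f'"{path}" is not an existing local directory')
    return local
```

With this split, `validate_model_path("s3://cortex-examples/iris-classifier/tensorflow")` passes S3 validation, while a relative path such as `.` resolves to an absolute local directory instead of producing the "is not a valid s3 path" error above.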


checklist:

  • run `make test` and `make lint`
  • test manually (i.e. build/push all images, restart the operator, and re-deploy the APIs)

@vishalbollu vishalbollu merged commit 4ad6d53 into cortexlabs:master May 21, 2020
@RobertLucian RobertLucian deleted the fix/locally-saved-models branch May 21, 2020 14:52