SDK - Capturing function dependencies when creating lightweight components #1372
Conversation
@gaoning777 I've created an issue to track the Dill vs. Cloudpickle research in the future. #1387
As requested by Ning
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: Ark-kun. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/test kubeflow-pipeline-e2e-test
I've introduced code pickling to capture dependencies in #1372. Later I discovered that there is a serious opcode incompatibility between Python versions 3.5 and 3.6+; see my analysis of the issue: cloudpipe/cloudpickle#293. Due to this issue I decided to switch back to using source code copying by default and to continue improving it. Until we stop supporting Python 3.5 (#668) it is too dangerous to use code pickling by default. Code pickling can be enabled by specifying `pickle_code=True` when calling `func_to_container_op`.
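For reference, a minimal sketch of the opt-in described in this comment. The `pickle_code` keyword is taken from the text above and may differ in later SDK releases; treat this as an illustration rather than the definitive API.

```python
from kfp import components


def add(a: float, b: float) -> float:
    """A self-contained function used as a lightweight component."""
    return a + b


# Default behavior: the function's source code is copied into the component.
add_op = components.func_to_container_op(add)

# Opt in to code pickling (parameter name as described in the comment above).
# Avoid this if the component may run under a different Python minor version
# (3.5 vs. 3.6+) because of the opcode incompatibility mentioned above.
add_pickled_op = components.func_to_container_op(add, pickle_code=True)
```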
Lightweight components convert a Python function into a component that can be used as a pipeline step.
Previously, not every function could be converted: the function had to be self-contained (it could not use imports, variables, functions, or classes defined outside of the function).
This change removes those limitations.
Which dependencies are now captured alongside the function? All objects that are in the same Python module as the function being converted to a pipeline op.
Everything outside of the function's module is only referenced and must exist in the base image or be installed dynamically.
This change is
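To illustrate the description above, here is a hypothetical sketch of a function that relies on module-level objects; the constant and helper names are invented for illustration and are not part of this PR.

```python
from kfp import components

SCALE = 2.0  # module-level constant (hypothetical): same module, so it can be captured


def _scale(x: float) -> float:
    """Module-level helper (hypothetical): same module, so it can be captured too."""
    return x * SCALE


def scale_and_add(a: float, b: float) -> float:
    # `_scale` and `SCALE` live in the same module as this function and are
    # captured alongside it; anything imported from other modules is only
    # referenced and must exist in the base image or be installed dynamically.
    return _scale(a) + _scale(b)


scale_and_add_op = components.func_to_container_op(scale_and_add)
```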