Commit e108ea0

Update version to 0.1.0
1 parent bb29f2c commit e108ea0

File tree

6 files changed: +7 additions, −7 deletions


README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -1131,7 +1131,7 @@ class Handler:
     # define any handler methods for HTTP/gRPC workloads here
 ```
 
-When explicit model paths are specified in the Python handler's Nucleus configuration, Nucleus provides a `model_client` to your Handler's constructor. `model_client` is an instance of [ModelClient](https://github.com/cortexlabs/nucleus/tree/master/src/cortex/cortex_internal/lib/client/python.py) that is used to load model(s) (it calls the `load_model()` method of your handler, which must be defined when using explicit model paths). It should be saved as an instance variable in your handler class, and your handler method should call `model_client.get_model()` to load your model for inference. Preprocessing of the JSON/gRPC payload and postprocessing of predictions can be implemented in your handler method as well.
+When explicit model paths are specified in the Python handler's Nucleus configuration, Nucleus provides a `model_client` to your Handler's constructor. `model_client` is an instance of [ModelClient](https://github.com/cortexlabs/nucleus/tree/0.1/src/cortex/cortex_internal/lib/client/python.py) that is used to load model(s) (it calls the `load_model()` method of your handler, which must be defined when using explicit model paths). It should be saved as an instance variable in your handler class, and your handler method should call `model_client.get_model()` to load your model for inference. Preprocessing of the JSON/gRPC payload and postprocessing of predictions can be implemented in your handler method as well.
 
 When multiple models are defined using the Handler's `multi_model_reloading` field, the `model_client.get_model()` method expects an argument `model_name` which must hold the name of the model that you want to load (for example: `self.client.get_model("text-generator")`). There is also an optional second argument to specify the model version.
 
````
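The handler pattern described in the README excerpt above can be sketched as follows. This is a hypothetical illustration, not Nucleus's actual implementation: `StubModelClient` stands in for the real `ModelClient` (which Nucleus constructs and passes to your Handler at startup), and the model name, handler method, and config are made up for the example.

```python
class StubModelClient:
    """Stand-in for cortex_internal.lib.client.python.ModelClient.

    The real client loads models from the configured paths by calling
    your handler's load_model(); this stub holds pre-loaded models.
    """

    def __init__(self, models):
        self._models = models  # model name -> loaded model object

    def get_model(self, model_name=None, model_version="latest"):
        # Nucleus resolves the version on disk; the stub ignores it.
        key = model_name or "_cortex_default"
        return self._models[key]


class Handler:
    def __init__(self, config, model_client):
        # Save the client as an instance variable, as the docs describe.
        self.client = model_client

    def load_model(self, model_path):
        # Called by Nucleus when explicit model paths are configured.
        # A real implementation would deserialize a model from model_path.
        return lambda text: text.upper()

    def handle_post(self, payload):
        # Preprocess the JSON payload, load the named model, postprocess.
        model = self.client.get_model("text-generator")
        return {"generated": model(payload["text"])}


client = StubModelClient({"text-generator": lambda text: text.upper()})
handler = Handler(config={}, model_client=client)
print(handler.handle_post({"text": "hello"}))  # {'generated': 'HELLO'}
```

With `multi_model_reloading`, the only change in the handler method is passing the model's name to `get_model()`, as shown.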
````diff
@@ -1305,7 +1305,7 @@ class Handler:
     # define any handler methods for HTTP/gRPC workloads here
 ```
 
-Nucleus provides a `tensorflow_client` to your Handler's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/nucleus/tree/master/src/cortex/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your handler method should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your handler method as well.
+Nucleus provides a `tensorflow_client` to your Handler's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/nucleus/tree/0.1/src/cortex/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your handler method should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your handler method as well.
 
 When multiple models are defined using the Handler's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). There is also an optional third argument to specify the model version.
 
````
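The TensorFlow handler flow described in the second hunk can be sketched the same way. Again a hypothetical stub stands in for the real `TensorFlowClient` (which proxies requests to a TensorFlow Serving container over gRPC), and the model name and pre/postprocessing are invented for the example.

```python
class StubTensorFlowClient:
    """Stand-in for cortex_internal.lib.client.tensorflow.TensorFlowClient.

    The real client forwards the payload to TF Serving; this stub maps
    model names to local callables instead.
    """

    def __init__(self, serving_fns):
        self._serving_fns = serving_fns  # model name -> serving callable

    def predict(self, payload, model_name=None, model_version="latest"):
        key = model_name or "_cortex_default"
        return self._serving_fns[key](payload)


class Handler:
    def __init__(self, config, tensorflow_client):
        # Save the client as an instance variable, as the docs describe.
        self.client = tensorflow_client

    def handle_post(self, payload):
        # Preprocess the JSON payload, predict via the named model,
        # then postprocess the prediction into a response.
        tokens = payload["text"].split()
        prediction = self.client.predict(tokens, "text-generator")
        return {"prediction": prediction}


client = StubTensorFlowClient({"text-generator": len})
handler = Handler(config={}, tensorflow_client=client)
print(handler.handle_post({"text": "hello world"}))  # {'prediction': 2}
```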
nucleus/templates/handler.Dockerfile

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 # to replace when building the dockerfile
 FROM $BASE_IMAGE
-ENV CORTEX_MODEL_SERVER_VERSION=master
+ENV CORTEX_MODEL_SERVER_VERSION=0.1.0
 
 RUN apt-get update -qq && apt-get install -y -q \
     build-essential \
```

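Per the template's comment, `$BASE_IMAGE` is substituted when the image is built. A minimal sketch of that rendering step, assuming a simple string substitution (the base image name and `render_dockerfile` helper are illustrative, not part of Nucleus):

```python
# Hypothetical rendering of the handler.Dockerfile template: replace the
# $BASE_IMAGE placeholder, keep the pinned version, and the result is a
# Dockerfile ready for `docker build`.
TEMPLATE = """\
FROM $BASE_IMAGE
ENV CORTEX_MODEL_SERVER_VERSION=0.1.0
"""

def render_dockerfile(template: str, base_image: str) -> str:
    # The build scripts presumably perform an equivalent substitution.
    return template.replace("$BASE_IMAGE", base_image)

print(render_dockerfile(TEMPLATE, "ubuntu:20.04"))
```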
nucleus/templates/tfs.Dockerfile

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,2 +1,2 @@
 FROM $BASE_IMAGE
-ENV CORTEX_MODEL_SERVER_VERSION=master
+ENV CORTEX_MODEL_SERVER_VERSION=0.1.0
```

setup.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -14,7 +14,7 @@
 
 import setuptools
 
-CORTEX_MODEL_SERVER_VERSION = "master"
+CORTEX_MODEL_SERVER_VERSION = "0.1.0"
 
 with open("requirements.txt") as fp:
     install_requires = fp.read()
```

src/cortex/cortex_internal/consts.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -13,4 +13,4 @@
 # limitations under the License.
 
 SINGLE_MODEL_NAME = "_cortex_default"
-MODEL_SERVER_VERSION = "master"
+MODEL_SERVER_VERSION = "0.1.0"
```

src/cortex/setup.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -17,7 +17,7 @@
 import pkg_resources
 from setuptools import setup, find_packages
 
-CORTEX_MODEL_SERVER_VERSION = "master"
+CORTEX_MODEL_SERVER_VERSION = "0.1.0"
 
 with pathlib.Path("cortex_internal.requirements.txt").open() as requirements_txt:
     install_requires = [
```
