Commit 09c40ff

Update version to 0.27.0
1 parent e7a7124 commit 09c40ff

27 files changed: +74 -74 lines changed

build/build-image.sh

Lines changed: 1 addition & 1 deletion

````diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.27.0
 
 image=$1
 dir="${ROOT}/images/${image/-slim}"
````
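The `${image/-slim}` expansion in the last context line is plain bash pattern substitution: it deletes the first occurrence of `-slim` from the image name, so slim and non-slim image variants resolve to the same source directory. A minimal sketch (the `ROOT` value here is a hypothetical stand-in):

```shell
# Bash pattern substitution: ${var/pattern} removes the first match of
# "pattern" from $var -- here, the "-slim" suffix of an image name.
ROOT="/opt/cortex"                    # hypothetical repo root for illustration
image="python-predictor-cpu-slim"
dir="${ROOT}/images/${image/-slim}"   # same expansion as in build-image.sh
echo "$dir"                           # prints /opt/cortex/images/python-predictor-cpu
```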

build/cli.sh

Lines changed: 1 addition & 1 deletion

````diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.27.0
 
 arg1=${1:-""}
 upload="false"
````

build/push-image.sh

Lines changed: 1 addition & 1 deletion

````diff
@@ -17,7 +17,7 @@
 
 set -euo pipefail
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.27.0
 
 image=$1
 
````

docs/clients/install.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -9,10 +9,10 @@ pip install cortex
 ```
 
 <!-- CORTEX_VERSION_README x2 -->
-To install or upgrade to a specific version (e.g. v0.26.0):
+To install or upgrade to a specific version (e.g. v0.27.0):
 
 ```bash
-pip install cortex==0.26.0
+pip install cortex==0.27.0
 ```
 
 To upgrade to the latest version:
@@ -25,8 +25,8 @@ pip install --upgrade cortex
 
 <!-- CORTEX_VERSION_README x2 -->
 ```bash
-# For example to download CLI version 0.26.0 (Note the "v"):
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.26.0/get-cli.sh)"
+# For example to download CLI version 0.27.0 (Note the "v"):
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.27.0/get-cli.sh)"
 ```
 
 By default, the Cortex CLI is installed at `/usr/local/bin/cortex`. To install the executable elsewhere, export the `CORTEX_INSTALL_PATH` environment variable to your desired location before running the command above.
````
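As a worked example of the `CORTEX_INSTALL_PATH` behavior described in that last context line (the target path here is a hypothetical choice, not a recommendation):

```shell
# Install the CLI into ~/.local/bin instead of the default /usr/local/bin.
# The get-cli.sh installer reads CORTEX_INSTALL_PATH from the environment.
export CORTEX_INSTALL_PATH="$HOME/.local/bin/cortex"
mkdir -p "$(dirname "$CORTEX_INSTALL_PATH")"

# Then run the installer exactly as shown above:
# bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.27.0/get-cli.sh)"
```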

docs/clients/python.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -91,7 +91,7 @@ Deploy an API.
 
 **Arguments**:
 
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/ for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.27/ for schema.
 - `predictor` - A Cortex Predictor class implementation. Not required when deploying a traffic splitter.
 - `task` - A callable class/function implementation. Not required for RealtimeAPI/BatchAPI/TrafficSplitter kinds.
 - `requirements` - A list of PyPI dependencies that will be installed before the predictor class implementation is invoked.
````
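For context on the `api_spec` argument documented in this hunk, a minimal dictionary for a realtime API might look like the following; the field values are hypothetical, and the full schema lives at the docs URL referenced above:

```python
# Hypothetical minimal api_spec for a realtime API; only the fields shown
# here are assumed -- consult the versioned schema docs for the full set.
api_spec = {
    "name": "text-generator",   # API name (required)
    "kind": "RealtimeAPI",      # e.g. RealtimeAPI, BatchAPI, TrafficSplitter
    "predictor": {
        "type": "python",
        "path": "predictor.py", # path to the PythonPredictor implementation
    },
}
```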

docs/clusters/aws/install.md

Lines changed: 13 additions & 13 deletions

````diff
@@ -89,17 +89,17 @@ The docker images used by the Cortex cluster can also be overridden, although th
 
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_request_monitor: quay.io/cortexlabs/request-monitor:master
-image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:master
-image_metrics_server: quay.io/cortexlabs/metrics-server:master
-image_inferentia: quay.io/cortexlabs/inferentia:master
-image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:master
-image_nvidia: quay.io/cortexlabs/nvidia:master
-image_fluent_bit: quay.io/cortexlabs/fluent-bit:master
-image_statsd: quay.io/cortexlabs/statsd:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
+image_operator: quay.io/cortexlabs/operator:0.27.0
+image_manager: quay.io/cortexlabs/manager:0.27.0
+image_downloader: quay.io/cortexlabs/downloader:0.27.0
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.27.0
+image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.27.0
+image_metrics_server: quay.io/cortexlabs/metrics-server:0.27.0
+image_inferentia: quay.io/cortexlabs/inferentia:0.27.0
+image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.27.0
+image_nvidia: quay.io/cortexlabs/nvidia:0.27.0
+image_fluent_bit: quay.io/cortexlabs/fluent-bit:0.27.0
+image_statsd: quay.io/cortexlabs/statsd:0.27.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.27.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.27.0
 ```
````

docs/clusters/gcp/install.md

Lines changed: 7 additions & 7 deletions

````diff
@@ -71,11 +71,11 @@ The docker images used by the Cortex cluster can also be overridden, although th
 
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_statsd: quay.io/cortexlabs/statsd:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
-image_pause: quay.io/cortexlabs/pause:master
+image_operator: quay.io/cortexlabs/operator:0.27.0
+image_manager: quay.io/cortexlabs/manager:0.27.0
+image_downloader: quay.io/cortexlabs/downloader:0.27.0
+image_statsd: quay.io/cortexlabs/statsd:0.27.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.27.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.27.0
+image_pause: quay.io/cortexlabs/pause:0.27.0
 ```
````

docs/workloads/batch/configuration.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -11,7 +11,7 @@
     path: <string> # path to a python file with a PythonPredictor class definition, relative to the Cortex root (required)
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
     python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master or quay.io/cortexlabs/python-predictor-gpu:master based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.27.0 or quay.io/cortexlabs/python-predictor-gpu:0.27.0 based on compute)
     env: <string: string> # dictionary of environment variables
     log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
     shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -46,8 +46,8 @@
       batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
     python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
-    tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:master or quay.io/cortexlabs/tensorflow-serving-cpu:master based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.27.0)
+    tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.27.0 or quay.io/cortexlabs/tensorflow-serving-gpu:0.27.0 based on compute)
     env: <string: string> # dictionary of environment variables
     log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
     shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -77,7 +77,7 @@
     ...
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
     python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:master or quay.io/cortexlabs/onnx-predictor-cpu:master based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:0.27.0 or quay.io/cortexlabs/onnx-predictor-gpu:0.27.0 based on compute)
     env: <string: string> # dictionary of environment variables
     log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
     shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
````

docs/workloads/batch/predictors.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -143,7 +143,7 @@ class TensorFlowPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.27/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`).
 
@@ -204,7 +204,7 @@ class ONNXPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.27/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`).
 
````
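The pattern described in the text being updated (save the injected client as an instance variable, call its `predict()` from your own `predict()`) can be sketched as a skeleton; this is an illustrative shape only, not the full Predictor interface, and the pre/postprocessing hooks are hypothetical:

```python
# Sketch of the tensorflow_client usage pattern described above.
# The tensorflow_client object is injected by Cortex at construction time.
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        self.client = tensorflow_client  # keep the client as an instance variable
        self.config = config

    def predict(self, payload):
        model_input = payload            # preprocessing of the JSON payload goes here
        prediction = self.client.predict(model_input)
        return prediction                # postprocessing of predictions goes here
```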

docs/workloads/dependencies/images.md

Lines changed: 13 additions & 13 deletions

````diff
@@ -11,19 +11,19 @@ mkdir my-api && cd my-api && touch Dockerfile
 Cortex's base Docker images are listed below. Depending on the Cortex Predictor and compute type specified in your API configuration, choose one of these images to use as the base for your Docker image:
 
 <!-- CORTEX_VERSION_BRANCH_STABLE x12 -->
-* Python Predictor (CPU): `quay.io/cortexlabs/python-predictor-cpu-slim:master`
+* Python Predictor (CPU): `quay.io/cortexlabs/python-predictor-cpu-slim:0.27.0`
 * Python Predictor (GPU): choose one of the following:
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda10.0-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda10.1-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda10.1-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda10.2-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda10.2-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda11.0-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda11.1-cudnn8`
-* Python Predictor (Inferentia): `quay.io/cortexlabs/python-predictor-inf-slim:master`
-* TensorFlow Predictor (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-predictor-slim:master`
-* ONNX Predictor (CPU): `quay.io/cortexlabs/onnx-predictor-cpu-slim:master`
-* ONNX Predictor (GPU): `quay.io/cortexlabs/onnx-predictor-gpu-slim:master`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.27.0-cuda10.0-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.27.0-cuda10.1-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.27.0-cuda10.1-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.27.0-cuda10.2-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.27.0-cuda10.2-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.27.0-cuda11.0-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.27.0-cuda11.1-cudnn8`
+* Python Predictor (Inferentia): `quay.io/cortexlabs/python-predictor-inf-slim:0.27.0`
+* TensorFlow Predictor (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-predictor-slim:0.27.0`
+* ONNX Predictor (CPU): `quay.io/cortexlabs/onnx-predictor-cpu-slim:0.27.0`
+* ONNX Predictor (GPU): `quay.io/cortexlabs/onnx-predictor-gpu-slim:0.27.0`
 
 Note: the images listed above use the `-slim` suffix; Cortex's default API images are not `-slim`, since they have additional dependencies installed to cover common use cases. If you are building your own Docker image, starting with a `-slim` Predictor image will result in a smaller image size.
 
@@ -33,7 +33,7 @@ The sample `Dockerfile` below inherits from Cortex's Python CPU serving image, a
 ```dockerfile
 # Dockerfile
 
-FROM quay.io/cortexlabs/python-predictor-cpu-slim:master
+FROM quay.io/cortexlabs/python-predictor-cpu-slim:0.27.0
 
 RUN apt-get update \
     && apt-get install -y tree \
````

docs/workloads/realtime/configuration.md

Lines changed: 6 additions & 6 deletions

````diff
@@ -2,7 +2,7 @@
 
 ## Python Predictor
 
-<!-- CORTEX_VERSION_BRANCH_STABLE x2 -->
+<!-- CORTEX_VERSION_BRANCH_STABLE x3 -->
 ```yaml
 - name: <string> # API name (required)
   kind: RealtimeAPI
@@ -25,7 +25,7 @@
     threads_per_process: <int> # the number of threads per process (default: 1)
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
     python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master, quay.io/cortexlabs/python-predictor-gpu:master or quay.io/cortexlabs/python-predictor-inf:master based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.27.0, quay.io/cortexlabs/python-predictor-gpu:0.27.0 or quay.io/cortexlabs/python-predictor-inf:0.27.0 based on compute)
     env: <string: string> # dictionary of environment variables
     log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
     shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -56,7 +56,7 @@
 
 ## TensorFlow Predictor
 
-<!-- CORTEX_VERSION_BRANCH_STABLE x3 -->
+<!-- CORTEX_VERSION_BRANCH_STABLE x4 -->
 ```yaml
 - name: <string> # API name (required)
   kind: RealtimeAPI
@@ -81,8 +81,8 @@
     threads_per_process: <int> # the number of threads per process (default: 1)
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
    python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
-    tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:master, quay.io/cortexlabs/tensorflow-serving-cpu:master or quay.io/cortexlabs/tensorflow-serving-inf:master based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.27.0)
+    tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.27.0, quay.io/cortexlabs/tensorflow-serving-gpu:0.27.0, or quay.io/cortexlabs/tensorflow-serving-inf:0.27.0 based on compute)
     env: <string: string> # dictionary of environment variables
     log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
     shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -133,7 +133,7 @@
     threads_per_process: <int> # the number of threads per process (default: 1)
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
     python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:master, quay.io/cortexlabs/onnx-predictor-cpu:master or quay.io/cortexlabs/onnx-predictor-inf:master based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:0.27.0 or quay.io/cortexlabs/onnx-predictor-gpu:0.27.0 based on compute)
     env: <string: string> # dictionary of environment variables
     log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
     shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
````
