
Commit 46ee707

Update version to 0.32.0

Parent: 49bc94c

28 files changed: +80 −80 lines

build/build-image.sh

Lines changed: 1 addition & 1 deletion
```diff
@@ -19,7 +19,7 @@ set -euo pipefail

 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

-CORTEX_VERSION=master
+CORTEX_VERSION=0.32.0

 image=$1
```

build/cli.sh

Lines changed: 1 addition & 1 deletion
```diff
@@ -19,7 +19,7 @@ set -euo pipefail

 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

-CORTEX_VERSION=master
+CORTEX_VERSION=0.32.0

 arg1=${1:-""}
 upload="false"
```

build/push-image.sh

Lines changed: 1 addition & 1 deletion
```diff
@@ -17,7 +17,7 @@

 set -euo pipefail

-CORTEX_VERSION=master
+CORTEX_VERSION=0.32.0

 image=$1
```

dev/registry.sh

Lines changed: 1 addition & 1 deletion
```diff
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-CORTEX_VERSION=master
+CORTEX_VERSION=0.32.0

 set -eo pipefail
```

docs/clients/install.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -9,10 +9,10 @@ pip install cortex
 ```

 <!-- CORTEX_VERSION_README x2 -->
-To install or upgrade to a specific version (e.g. v0.31.1):
+To install or upgrade to a specific version (e.g. v0.32.0):

 ```bash
-pip install cortex==0.31.1
+pip install cortex==0.32.0
 ```

 To upgrade to the latest version:
@@ -25,8 +25,8 @@ pip install --upgrade cortex

 <!-- CORTEX_VERSION_README x2 -->
 ```bash
-# For example to download CLI version 0.31.1 (Note the "v"):
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.31.1/get-cli.sh)"
+# For example to download CLI version 0.32.0 (Note the "v"):
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.32.0/get-cli.sh)"
 ```

 By default, the Cortex CLI is installed at `/usr/local/bin/cortex`. To install the executable elsewhere, export the `CORTEX_INSTALL_PATH` environment variable to your desired location before running the command above.
````
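After pinning the version as the updated docs describe, a quick check confirms which client version actually resolved in the environment. A minimal sketch using only the Python standard library (assumes Python 3.8+ for `importlib.metadata`):

```python
# Report the installed cortex client version (stdlib only, Python 3.8+).
from importlib.metadata import PackageNotFoundError, version

try:
    print(version("cortex"))  # expect "0.32.0" after `pip install cortex==0.32.0`
except PackageNotFoundError:
    print("cortex is not installed in this environment")
```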

docs/clients/python.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -88,7 +88,7 @@ Deploy an API.

 **Arguments**:

-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/ for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.32/ for schema.
 - `predictor` - A Cortex Predictor class implementation. Not required for TaskAPI/TrafficSplitter kinds.
 - `task` - A callable class/function implementation. Not required for RealtimeAPI/BatchAPI/TrafficSplitter kinds.
 - `requirements` - A list of PyPI dependencies that will be installed before the predictor class implementation is invoked.
```
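For context on how these arguments fit together, here is a minimal deployment sketch. It is not taken from the diff: the `"aws"` environment name, API name, and config values are hypothetical, it assumes the documented deploy method is the client's `create_api`, and the `api_spec` fields follow the schema linked above.

```python
import cortex

# Hypothetical predictor; `config` is the arbitrary dictionary from api_spec.
class PythonPredictor:
    def __init__(self, config):
        self.greeting = config.get("greeting", "hello")

    def predict(self, payload):
        return f"{self.greeting}, {payload['name']}"

cx = cortex.client("aws")  # hypothetical environment name
cx.create_api(
    api_spec={
        "name": "greeter",
        "kind": "RealtimeAPI",
        "predictor": {"type": "python", "config": {"greeting": "hi"}},
    },
    predictor=PythonPredictor,  # not required for TaskAPI/TrafficSplitter kinds
    requirements=[],            # PyPI deps installed before the predictor runs
)
```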

docs/clusters/management/create.md

Lines changed: 22 additions & 22 deletions
````diff
@@ -99,26 +99,26 @@ The docker images used by the cluster can also be overridden. They can be config

 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_request_monitor: quay.io/cortexlabs/request-monitor:master
-image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:master
-image_metrics_server: quay.io/cortexlabs/metrics-server:master
-image_inferentia: quay.io/cortexlabs/inferentia:master
-image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:master
-image_nvidia: quay.io/cortexlabs/nvidia:master
-image_fluent_bit: quay.io/cortexlabs/fluent-bit:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
-image_prometheus: quay.io/cortexlabs/prometheus:master
-image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:master
-image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:master
-image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:master
-image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:master
-image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:master
-image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:master
-image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:master
-image_grafana: quay.io/cortexlabs/grafana:master
-image_event_exporter: quay.io/cortexlabs/event-exporter:master
+image_operator: quay.io/cortexlabs/operator:0.32.0
+image_manager: quay.io/cortexlabs/manager:0.32.0
+image_downloader: quay.io/cortexlabs/downloader:0.32.0
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.32.0
+image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.32.0
+image_metrics_server: quay.io/cortexlabs/metrics-server:0.32.0
+image_inferentia: quay.io/cortexlabs/inferentia:0.32.0
+image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.32.0
+image_nvidia: quay.io/cortexlabs/nvidia:0.32.0
+image_fluent_bit: quay.io/cortexlabs/fluent-bit:0.32.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.32.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.32.0
+image_prometheus: quay.io/cortexlabs/prometheus:0.32.0
+image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.32.0
+image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.32.0
+image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.32.0
+image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:0.32.0
+image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:0.32.0
+image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:0.32.0
+image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:0.32.0
+image_grafana: quay.io/cortexlabs/grafana:0.32.0
+image_event_exporter: quay.io/cortexlabs/event-exporter:0.32.0
 ```
````

docs/workloads/async/configuration.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -26,7 +26,7 @@ predictor:
   shell: <string> # relative path to a shell script for system package installation (default: dependencies.sh)
   config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master, quay.io/cortexlabs/python-predictor-gpu:master-cuda10.2-cudnn8, or quay.io/cortexlabs/python-predictor-inf:master based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.32.0, quay.io/cortexlabs/python-predictor-gpu:0.32.0-cuda10.2-cudnn8, or quay.io/cortexlabs/python-predictor-inf:0.32.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -49,8 +49,8 @@ predictor:
   signature_key: # name of the signature def to use for prediction (required if your model has more than one signature def)
   config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
-  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master, quay.io/cortexlabs/tensorflow-serving-gpu:master, or quay.io/cortexlabs/tensorflow-serving-inf:master based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.32.0)
+  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.32.0, quay.io/cortexlabs/tensorflow-serving-gpu:0.32.0, or quay.io/cortexlabs/tensorflow-serving-inf:0.32.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -72,7 +72,7 @@ predictor:
   path: <string> # S3 path to an exported model directory (e.g. s3://my-bucket/exported_model/) (either this, 'dir', or 'paths' must be provided)
   config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:master or quay.io/cortexlabs/onnx-predictor-gpu:master based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:0.32.0 or quay.io/cortexlabs/onnx-predictor-gpu:0.32.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
```

docs/workloads/async/predictors.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -132,7 +132,7 @@ class TensorFlowPredictor:
 <!-- CORTEX_VERSION_MINOR -->

 Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance
-of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py)
+of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.32/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py)
 that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as
 an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make
 an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions
@@ -191,7 +191,7 @@ class ONNXPredictor:
 <!-- CORTEX_VERSION_MINOR -->

 Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance
-of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/onnx.py)
+of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.32/pkg/cortex/serve/cortex_internal/lib/client/onnx.py)
 that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in
 your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your
 exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in
```
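Both passages describe the same pattern: store the injected client as an instance variable and have `predict()` delegate to `client.predict()`. A minimal sketch of the TensorFlow variant (the payload handling is a hypothetical example; the ONNX variant mirrors it with `onnx_client`):

```python
# Sketch of the client pattern described above.
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        self.client = tensorflow_client  # connection to TensorFlow Serving
        self.config = config

    def predict(self, payload):
        model_input = {"text": payload["text"]}  # hypothetical preprocessing
        prediction = self.client.predict(model_input)
        return prediction  # postprocessing of predictions would go here
```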

docs/workloads/batch/configuration.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -19,7 +19,7 @@ predictor:
   path: <string> # path to a python file with a PythonPredictor class definition, relative to the Cortex root (required)
   config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master or quay.io/cortexlabs/python-predictor-gpu:master-cuda10.2-cudnn8 based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.32.0 or quay.io/cortexlabs/python-predictor-gpu:0.32.0-cuda10.2-cudnn8 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -49,8 +49,8 @@ predictor:
   batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
   config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
-  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master or quay.io/cortexlabs/tensorflow-serving-gpu:master based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.32.0)
+  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.32.0 or quay.io/cortexlabs/tensorflow-serving-gpu:0.32.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -75,7 +75,7 @@ predictor:
   ...
   config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:master or quay.io/cortexlabs/onnx-predictor-gpu:master based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:0.32.0 or quay.io/cortexlabs/onnx-predictor-gpu:0.32.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
```

docs/workloads/batch/predictors.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -143,7 +143,7 @@ class TensorFlowPredictor:
 ```

 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.32/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

 When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`).

@@ -204,7 +204,7 @@ class ONNXPredictor:
 ```

 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.32/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

 When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`).
````
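The multi-model form in both passages simply threads a `model_name` through to the client as a second argument. A minimal sketch (`"text-generator"` is a hypothetical name that would correspond to an entry in the Predictor's `models` field):

```python
# Sketch of the multi-model pattern described above.
class ONNXPredictor:
    def __init__(self, onnx_client, config):
        self.client = onnx_client

    def predict(self, payload):
        # select the configured model by name, defaulting to a hypothetical one
        model_name = payload.get("model", "text-generator")
        return self.client.predict(payload["input"], model_name)
```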
docs/workloads/dependencies/images.md

Lines changed: 14 additions & 14 deletions
````diff
@@ -11,27 +11,27 @@ mkdir my-api && cd my-api && touch Dockerfile
 Cortex's base Docker images are listed below. Depending on the Cortex Predictor and compute type specified in your API configuration, choose one of these images to use as the base for your Docker image:

 <!-- CORTEX_VERSION_BRANCH_STABLE x12 -->
-* Python Predictor (CPU): `quay.io/cortexlabs/python-predictor-cpu:master`
+* Python Predictor (CPU): `quay.io/cortexlabs/python-predictor-cpu:0.32.0`
 * Python Predictor (GPU): choose one of the following:
-  * `quay.io/cortexlabs/python-predictor-gpu:master-cuda10.0-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu:master-cuda10.1-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu:master-cuda10.1-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu:master-cuda10.2-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu:master-cuda10.2-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu:master-cuda11.0-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu:master-cuda11.1-cudnn8`
-* Python Predictor (Inferentia): `quay.io/cortexlabs/python-predictor-inf:master`
-* TensorFlow Predictor (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-predictor:master`
-* ONNX Predictor (CPU): `quay.io/cortexlabs/onnx-predictor-cpu:master`
-* ONNX Predictor (GPU): `quay.io/cortexlabs/onnx-predictor-gpu:master`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.32.0-cuda10.0-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.32.0-cuda10.1-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.32.0-cuda10.1-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.32.0-cuda10.2-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.32.0-cuda10.2-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.32.0-cuda11.0-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.32.0-cuda11.1-cudnn8`
+* Python Predictor (Inferentia): `quay.io/cortexlabs/python-predictor-inf:0.32.0`
+* TensorFlow Predictor (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-predictor:0.32.0`
+* ONNX Predictor (CPU): `quay.io/cortexlabs/onnx-predictor-cpu:0.32.0`
+* ONNX Predictor (GPU): `quay.io/cortexlabs/onnx-predictor-gpu:0.32.0`

 The sample `Dockerfile` below inherits from Cortex's Python CPU serving image, and installs 3 packages. `tree` is a system package and `pandas` and `rdkit` are Python packages.

 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```dockerfile
 # Dockerfile

-FROM quay.io/cortexlabs/python-predictor-cpu:master
+FROM quay.io/cortexlabs/python-predictor-cpu:0.32.0

 RUN apt-get update \
     && apt-get install -y tree \
@@ -49,7 +49,7 @@ If you need to upgrade the Python Runtime version on your image, you can follow
 ```Dockerfile
 # Dockerfile

-FROM quay.io/cortexlabs/python-predictor-cpu:master
+FROM quay.io/cortexlabs/python-predictor-cpu:0.32.0

 # upgrade python runtime version
 RUN conda update -n base -c defaults conda
````
