Update docs to djl 0.23.0 (#954)
sindhuvahinis authored Jul 14, 2023 · 1 parent 943177b · commit 9498916
Showing 5 changed files with 20 additions and 20 deletions.
12 changes: 6 additions & 6 deletions README.md
````diff
@@ -50,20 +50,20 @@ brew services stop djl-serving
 For Ubuntu
 
 ```
-curl -O https://publish.djl.ai/djl-serving/djl-serving_0.22.1-1_all.deb
-sudo dpkg -i djl-serving_0.22.1-1_all.deb
+curl -O https://publish.djl.ai/djl-serving/djl-serving_0.23.0-1_all.deb
+sudo dpkg -i djl-serving_0.23.0-1_all.deb
 ```
 
 For Windows
 
 We are considering to create a `chocolatey` package for Windows. For the time being, you can
-download djl-serving zip file from [here](https://publish.djl.ai/djl-serving/serving-0.22.1.zip).
+download djl-serving zip file from [here](https://publish.djl.ai/djl-serving/serving-0.23.0.zip).
 
 ```
-curl -O https://publish.djl.ai/djl-serving/serving-0.22.1.zip
-unzip serving-0.22.1.zip
+curl -O https://publish.djl.ai/djl-serving/serving-0.23.0.zip
+unzip serving-0.23.0.zip
 # start djl-serving
-serving-0.22.1\bin\serving.bat
+serving-0.23.0\bin\serving.bat
 ```
 
 ### Docker
````
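
A quick smoke test helps confirm a version bump like this. The sketch below assumes the `.deb` install puts a `djl-serving` launcher on the PATH and that the server exposes its default `/ping` health endpoint on port 8080 (stock defaults, but worth verifying):

```
# Start the freshly installed server in the background.
djl-serving &

# Give it a few seconds to bind, then hit the health endpoint.
sleep 5
curl -s http://localhost:8080/ping
# A healthy server should answer with a small JSON status payload.
```
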
16 changes: 8 additions & 8 deletions benchmark/README.md
````diff
@@ -43,25 +43,25 @@ sudo snap alias djlbench djl-bench
 - Or download .deb package from S3
 
 ```
-curl -O https://publish.djl.ai/djl-bench/0.22.1/djl-bench_0.22.1-1_all.deb
-sudo dpkg -i djl-bench_0.22.1-1_all.deb
+curl -O https://publish.djl.ai/djl-bench/0.23.0/djl-bench_0.23.0-1_all.deb
+sudo dpkg -i djl-bench_0.23.0-1_all.deb
 ```
 
 For macOS, centOS or Amazon Linux 2
 
-You can download djl-bench zip file from [here](https://publish.djl.ai/djl-bench/0.22.1/benchmark-0.22.1.zip).
+You can download djl-bench zip file from [here](https://publish.djl.ai/djl-bench/0.23.0/benchmark-0.23.0.zip).
 
 ```
-curl -O https://publish.djl.ai/djl-bench/0.22.1/benchmark-0.22.1.zip
-unzip benchmark-0.22.1.zip
-rm benchmark-0.22.1.zip
-sudo ln -s $PWD/benchmark-0.22.1/bin/benchmark /usr/bin/djl-bench
+curl -O https://publish.djl.ai/djl-bench/0.23.0/benchmark-0.23.0.zip
+unzip benchmark-0.23.0.zip
+rm benchmark-0.23.0.zip
+sudo ln -s $PWD/benchmark-0.23.0/bin/benchmark /usr/bin/djl-bench
 ```
 
 For Windows
 
 We are considering to create a `chocolatey` package for Windows. For the time being, you can
-download djl-bench zip file from [here](https://publish.djl.ai/djl-bench/0.22.1/benchmark-0.22.1.zip).
+download djl-bench zip file from [here](https://publish.djl.ai/djl-bench/0.23.0/benchmark-0.23.0.zip).
 
 Or you can run benchmark using gradle:
````
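
Once `djl-bench` is linked onto the PATH, a single run exercises the new build end to end. The flags below (`-e` engine, `-u` model URL, `-s` input shape, `-c` iteration count) are recalled from the benchmark CLI and should be checked against `djl-bench --help` before relying on them:

```
# Benchmark a PyTorch model zoo ResNet with 100 iterations of a
# 1x3x224x224 input; adjust the shape to your model's signature.
djl-bench -e PyTorch \
  -u djl://ai.djl.pytorch/resnet \
  -s 1,3,224,224 \
  -c 100
```
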
4 changes: 2 additions & 2 deletions engines/python/README.md
````diff
@@ -29,13 +29,13 @@ The javadocs output is generated in the `build/doc/javadoc` folder.
 ## Installation
 You can pull the Python engine from the central Maven repository by including the following dependency:
 
-- ai.djl.python:python:0.22.1
+- ai.djl.python:python:0.23.0
 
 ```xml
 <dependency>
     <groupId>ai.djl.python</groupId>
     <artifactId>python</artifactId>
-    <version>0.22.1</version>
+    <version>0.23.0</version>
     <scope>runtime</scope>
 </dependency>
 ```
````
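
A coordinate bump like this can be sanity-checked by probing Maven Central directly. The directory layout (`groupId` path, then `artifactId`, then version) is standard; the exact jar file name below is an assumption based on that convention:

```
# HTTP/1.1 200 means the 0.23.0 artifact is published; 404 means it is not.
curl -sI https://repo1.maven.org/maven2/ai/djl/python/python/0.23.0/python-0.23.0.jar | head -n 1
```
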
6 changes: 3 additions & 3 deletions serving/docker/README.md
````diff
@@ -32,7 +32,7 @@ mkdir models
 cd models
 curl -O https://resources.djl.ai/test-models/pytorch/bert_qa_jit.tar.gz
 
-docker run -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.22.1
+docker run -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.23.0
 ```
 
 ### GPU
@@ -42,7 +42,7 @@ mkdir models
 cd models
 curl -O https://resources.djl.ai/test-models/pytorch/bert_qa_jit.tar.gz
 
-docker run -it --runtime=nvidia --shm-size 2g -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.22.1-pytorch-cu118
+docker run -it --runtime=nvidia --shm-size 2g -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.23.0-pytorch-cu118
 ```
 
 ### AWS Inferentia
@@ -52,5 +52,5 @@ mkdir models
 cd models
 
 curl -O https://resources.djl.ai/test-models/pytorch/resnet18_inf2_2_4.tar.gz
-docker run --device /dev/neuron0 -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.22.1-pytorch-inf2
+docker run --device /dev/neuron0 -it --rm -v $PWD:/opt/ml/model -p 8080:8080 deepjavalibrary/djl-serving:0.23.0-pytorch-inf2
 ```
````
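
After pulling a retagged image, the container can be exercised end to end. This sketch targets the CPU example above and assumes DJL Serving auto-registers the archive under its file name (`bert_qa_jit`) and that this test model accepts the question/paragraph JSON shape shown; both are assumptions to verify against the serving logs:

```
# Health check against the published port.
curl -s http://localhost:8080/ping

# One inference request against the auto-registered model.
curl -s -X POST http://localhost:8080/predictions/bert_qa_jit \
  -H "Content-Type: application/json" \
  -d '{"question": "How is the weather", "paragraph": "The weather is nice, it is a beautiful day."}'
```
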
2 changes: 1 addition & 1 deletion wlm/README.md
````diff
@@ -56,7 +56,7 @@ You can pull the server from the central Maven repository by including the follo
 <dependency>
     <groupId>ai.djl.serving</groupId>
     <artifactId>wlm</artifactId>
-    <version>0.22.1</version>
+    <version>0.23.0</version>
 </dependency>
 ```
````
