This repository was archived by the owner on Aug 7, 2025. It is now read-only.

Commit 4450287

Authored by sekyondaMeta, sekyonda, lxning, and agunapal
Fixing FAQs doc per issue #2204 (#2351)
* Update index.md: fix a broken link in index.md where the trailing .md is cut off from management_api.md. Added an anchor link to force the .md to show up.
* Update to index.md: fix several links ending in .md that Sphinx is breaking. Added anchor links to each link and a corresponding anchor in the affected doc. Tested locally and seems to be working.
* Update inference_api.md
* Updated typos: fixed typos and updated wordlist.txt
* Update wordlist.txt
* FAQs updates: updated a couple of broken links on the FAQ site per issue #2204
* Updates to resolve links
* Update some links in index: updated some links in index.md to go to the PyTorch HTML page instead of GitHub. This is a nicer fix for the .md Sphinx issue.

Co-authored-by: sekyonda <7411+sekyonda@users.noreply.ghe.oculus-rep.com>
Co-authored-by: lxning <23464292+lxning@users.noreply.github.com>
Co-authored-by: Ankith Gunapal <agunapal@ischool.Berkeley.edu>
1 parent: 77ca82d

File tree

3 files changed, +34 -35 lines changed


docs/FAQs.md

Lines changed: 28 additions & 28 deletions
```diff
@@ -15,8 +15,8 @@ Torchserve API's are compliant with the [OpenAPI specification 3.0](https://swag

 ### How to use Torchserve in production?
 Depending on your use case, you will be able to deploy torchserve in production using following mechanisms.
-> Standalone deployment. Refer [TorchServe docker documentation](https://github.com/pytorch/serve/tree/master/docker#readme) or [TorchServe documentation](https://github.com/pytorch/serve/tree/master/docs#readme)
-> Cloud based deployment. Refer [TorchServe kubernetes documentation](https://github.com/pytorch/serve/tree/master/kubernetes#readme) or [TorchServe cloudformation documentation](https://github.com/pytorch/serve/tree/master/examples/cloudformation/README.md)
+> Standalone deployment. Refer [TorchServe docker documentation](https://github.com/pytorch/serve/tree/master/docker#readme) or [TorchServe documentation](README.md)
+> Cloud based deployment. Refer [TorchServe kubernetes documentation](https://github.com/pytorch/serve/tree/master/kubernetes#readme) or [TorchServe cloudformation documentation](https://github.com/pytorch/serve/tree/master/examples/cloudformation/README.md#cloudformation)


 ### What's difference between Torchserve and a python web app using web frameworks like Flask, Django?
```
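For the standalone Docker deployment mentioned in this hunk, a minimal sketch of running the published image is shown below; the image tag and the 8080/8081 port mappings are the commonly documented defaults, used here as assumptions rather than as part of this commit:

```bash
# Sketch: run the published TorchServe image with the default inference
# (8080) and management (8081) ports exposed; the tag is illustrative.
docker run --rm -it -p 8080:8080 -p 8081:8081 pytorch/torchserve:latest
```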
```diff
@@ -25,7 +25,7 @@ Torchserve's main purpose is to serve models via http REST APIs, Torchserve is n
 Relevant issues: [[581](https://github.com/pytorch/serve/issues/581),[569](https://github.com/pytorch/serve/issues/569)]

 ### Are there any sample Models available?
-Various models are provided in Torchserve out of the box. Checkout out Torchserve [Model Zoo](https://github.com/pytorch/serve/blob/master/docs/model_zoo.md) for list of all available models. You can also check out the [examples](https://github.com/pytorch/serve/tree/master/examples) folder.
+Various models are provided in Torchserve out of the box. Checkout out Torchserve [Model Zoo](model_zoo.md) for list of all available models. You can also check out the [examples](https://github.com/pytorch/serve/tree/master/examples) folder.

 ### Does Torchserve support other models based on programming languages other than python?
 No, As of now only python based models are supported.
```
```diff
@@ -40,39 +40,39 @@ If a model converts international language string to bytes, client needs to use

 ## Deployment and config
 Relevant documents.
-- [Torchserve configuration](https://github.com/pytorch/serve/blob/master/docs/configuration.md)
-- [Model zoo](https://github.com/pytorch/serve/blob/master/docs/model_zoo.md#model-zoo)
-- [Snapshot](https://github.com/pytorch/serve/blob/master/docs/snapshot.md)
-- [Docker](../docker/README.md)
+- [Torchserve configuration](configuration.md)
+- [Model zoo](model_zoo.md)
+- [Snapshot](snapshot.md)
+- [Docker](https://github.com/pytorch/serve/blob/master/docker/README.md#docker-readme)

 ### Can I run Torchserve APIs on ports other than the default 8080 & 8081?
 Yes, Torchserve API ports are configurable using a properties file or environment variable.
-Refer [configuration.md](configuration.md) for more details.
+Refer to [configuration](configuration.md) for more details.


 ### How can I resolve model specific python dependency?
 You can provide a `requirements.txt` while creating a mar file using "--requirements-file/ -r" flag. Also, you can add dependency files using "--extra-files" flag.
-Refer [configuration.md](configuration.md) for more details.
+Refer to [configuration](configuration.md) for more details.

 ### Can I deploy Torchserve in Kubernetes?
 Yes, you can deploy Torchserve in Kubernetes using Helm charts.
-Refer [Kubernetes deployment ](../kubernetes/README.md) for more details.
+Refer [Kubernetes deployment ](https://github.com/pytorch/serve/blob/master/kubernetes/README.md#torchserve-kubernetes) for more details.

 ### Can I deploy Torchserve with AWS ELB and AWS ASG?
 Yes, you can deploy Torchserve on a multi-node ASG AWS EC2 cluster. There is a cloud formation template available [here](https://github.com/pytorch/serve/blob/master/examples/cloudformation/ec2-asg.yaml) for this type of deployment. Refer [ Multi-node EC2 deployment behind Elastic LoadBalancer (ELB)](https://github.com/pytorch/serve/tree/master/examples/cloudformation/README.md#multi-node-ec2-deployment-behind-elastic-loadbalancer-elb) more details.

 ### How can I backup and restore Torchserve state?
 TorchServe preserves server runtime configuration across sessions such that a TorchServe instance experiencing either a planned or unplanned service stop can restore its state upon restart. These saved runtime configuration files can be used for backup and restore.
-Refer [TorchServe model snapshot](snapshot.md#torchserve-model-snapshot) for more details.
+Refer to [TorchServe model snapshot](snapshot.md) for more details.

 ### How can I build a Torchserve image from source?
-Torchserve has a utility [script](../docker/build_image.sh) for creating docker images, the docker image can be hardware-based CPU or GPU compatible. A Torchserve docker image could be CUDA version specific as well.
+Torchserve has a utility [script](https://github.com/pytorch/serve/blob/master/docker/build_image.sh) for creating docker images, the docker image can be hardware-based CPU or GPU compatible. A Torchserve docker image could be CUDA version specific as well.

 All these docker images can be created using `build_image.sh` with appropriate options.

 Run `./build_image.sh --help` for all available options.

-Refer [Create Torchserve docker image from source](../docker/README.md#create-torchserve-docker-image) for more details.
+Refer to [Create Torchserve docker image from source](https://github.com/pytorch/serve/blob/master/docker/README.md#create-torchserve-docker-image) for more details.

 ### How to build a Torchserve image for a specific branch or commit id?
 To create a Docker image for a specific branch, use the following command:
```
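The hunk ends just before the branch-build command itself; the FAQ's own pointer is `./build_image.sh --help`. As a hypothetical sketch only (the `-b` and `-t` flag names are assumptions to be verified against that help output):

```bash
# Hypothetical sketch -- confirm flag names with ./build_image.sh --help.
# Builds a TorchServe docker image from a specific branch of the repo.
./build_image.sh -b my_feature_branch -t torchserve:my_feature_branch
```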
```diff
@@ -91,11 +91,11 @@ The image created using Dockerfile.dev has Torchserve installed from source wher
 TorchServe looks for the config.property file according to the order listed in the [doc](https://github.com/pytorch/serve/blob/master/docs/configuration.md#configproperties-file). There is no override mechanism.

 ### What are model_store, load_models, models?
-- model_store: A mandatory argument during TorchServe start. It can be either defined in config.property or overridden by TorchServe command line option "[--model-store](https://github.com/pytorch/serve/blob/master/docs/configuration.md#command-line-parameters)".
+- model_store: A mandatory argument during TorchServe start. It can be either defined in config.property or overridden by TorchServe command line option "[--model-store](configuration.md)".

-- load_models: An optional argument during TorchServe start. It can be either defined in config.property or overridden by TorchServe command line option "[--models](https://github.com/pytorch/serve/blob/master/docs/configuration.md#command-line-parameters)".
+- load_models: An optional argument during TorchServe start. It can be either defined in config.property or overridden by TorchServe command line option "[--models](configuration.md)".

-- [models](https://github.com/pytorch/serve/blob/master/docs/configuration.md#command-line-parameters): Defines a list of models' configuration in config.property. A model's configuration can be overridden by [management API](https://github.com/pytorch/serve/blob/master/docs/management_api.md#register-a-model). It does not decide which models will be loaded during TorchServe start. There is no relationship b.w "models" and "load_models" (ie. TorchServe command line option [--models](https://github.com/pytorch/serve/blob/master/docs/configuration.md#command-line-parameters)).
+- [models](configuration.md): Defines a list of models' configuration in config.property. A model's configuration can be overridden by [management API](management_api.md). It does not decide which models will be loaded during TorchServe start. There is no relationship b.w "models" and "load_models" (ie. TorchServe command line option [--models](configuration.md)).

 ###

```
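To make the model_store/load_models distinction above concrete, a minimal startup sketch follows; the directory and archive names are placeholders:

```bash
# Sketch: --model-store names the directory of .mar archives; --models
# selects which archives load at startup. densenet161.mar is a placeholder.
torchserve --start \
  --model-store model_store \
  --models densenet=densenet161.mar
```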
```diff
@@ -108,43 +108,43 @@ You can use any tool like Postman, Insomnia or even use a python script to do so

 ### How can I add a custom API to an existing framework?
 You can add a custom API using **plugins SDK** available in Torchserve.
-Refer to [serving sdk](../serving-sdk) and [plugins](../plugins) for more details.
+Refer to [serving sdk](https://github.com/pytorch/serve/tree/master/serving-sdk) and [plugins](https://github.com/pytorch/serve/tree/master/plugins) for more details.

 ### How can pass multiple images in Inference request call to my model?
 You can provide multiple data in a single inference request to your custom handler as a key-value pair in the `data` object.
-Refer [this](https://github.com/pytorch/serve/issues/529#issuecomment-658012913) for more details.
+Refer to [this issue](https://github.com/pytorch/serve/issues/529#issuecomment-658012913) for more details.

 ## Handler
 Relevant documents
-- [Default handlers](default_handlers.md#torchserve-default-inference-handlers)
-- [Custom Handlers](custom_service.md#custom-handlers)
+- [Default handlers](default_handlers.md)
+- [Custom Handlers](custom_service.md)

 ### How do I return an image output for a model?
 You would have to write a custom handler and modify the postprocessing to return the image
-Refer [custom service documentation](custom_service.md#custom-handlers) for more details.
+Refer to [custom service documentation](custom_service.md) for more details.

 ### How to enhance the default handlers?
 Write a custom handler that extends the default handler and just override the methods to be tuned.
-Refer [custom service documentation](custom_service.md#custom-handlers) for more details.
+Refer to [custom service documentation](custom_service.md) for more details.

 ### Do I always have to write a custom handler or are there default ones that I can use?
 Yes, you can deploy your model with no-code/ zero code by using builtin default handlers.
-Refer [default handlers](default_handlers.md#torchserve-default-inference-handlers) for more details.
+Refer to [default handlers](default_handlers.md) for more details.

 ### Is it possible to deploy Hugging Face models?
 Yes, you can deploy Hugging Face models using a custom handler.
-Refer [HuggingFace_Transformers](https://github.com/pytorch/serve/blob/master/examples/Huggingface_Transformers/README.md) for example.
+Refer to [HuggingFace_Transformers](https://github.com/pytorch/serve/blob/master/examples/Huggingface_Transformers/README.md#huggingface-transformers) for example.

 ## Model-archiver
 Relevant documents
-- [Model-archiver ](../model-archiver/README.md#torch-model-archiver-for-torchserve)
-- [Docker Readme](../docker/README.md)
+- [Model-archiver ](https://github.com/pytorch/serve/blob/master/model-archiver/README.md#torch-model-archiver-for-torchserve)
+- [Docker Readme](https://github.com/pytorch/serve/blob/master/docker/README.md#docker-readme)

 ### What is a mar file?
 A mar file is a zip file consisting of all model artifacts with the ".mar" extension. The cmd-line utility `torch-model-archiver` is used to create a mar file.

 ### How can create mar file using Torchserve docker container?
-Yes, you create your mar file using a Torchserve container. Follow the steps given [here](../docker/README.md#create-torch-model-archiver-from-container).
+Yes, you create your mar file using a Torchserve container. Follow the steps given [here](https://github.com/pytorch/serve/blob/master/docker/README.md#create-torch-model-archiver-from-container).

 ### Can I add multiple serialized files in single mar file?
 Currently `torch-model-archiver` allows supplying only one serialized file with `--serialized-file` parameter while creating the mar. However, you can supply any number and any type of file with `--extra-files` flag. All the files supplied in the mar file are available in `model_dir` location which can be accessed through the context object supplied to the handler's entry point.
```
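For the `--serialized-file` and `--extra-files` behavior described above, a sketch of a `torch-model-archiver` invocation; every file name here is a placeholder, and `image_classifier` stands in for one of the builtin handlers:

```bash
# Sketch: one serialized file plus arbitrary extra files bundled into the
# .mar; all of them surface under model_dir at runtime. Names are placeholders.
torch-model-archiver \
  --model-name mymodel \
  --version 1.0 \
  --serialized-file model.pt \
  --handler image_classifier \
  --extra-files "index_to_name.json,utils.py" \
  --requirements-file requirements.txt
```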
````diff
@@ -155,7 +155,7 @@ Sample code snippet:
 properties = context.system_properties
 model_dir = properties.get("model_dir")
 ```
-Refer [Torch model archiver cli](../model-archiver/README.md#torch-model-archiver-command-line-interface) for more details.
+Refer [Torch model archiver cli](https://github.com/pytorch/serve/blob/master/model-archiver/README.md#torch-model-archiver-command-line-interface) for more details.
 Relevant issues: [[#633](https://github.com/pytorch/serve/issues/633)]

 ### Can I download and register model using s3 presigned v4 url?
````
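The hunk cuts off at the presigned-URL question; for orientation, registering a model from a URL goes through the management API on the management port, roughly as below (the URL is a placeholder, not a real presigned link):

```bash
# Sketch: register a model archive from a URL via the management API;
# a presigned S3 v4 link would go in the url parameter. URL is a placeholder.
curl -X POST "http://localhost:8081/models?url=https://example.com/my-model.mar"
```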

docs/index.md

Lines changed: 5 additions & 6 deletions
```diff
@@ -4,20 +4,19 @@ TorchServe is a performant, flexible and easy to use tool for serving PyTorch mo


 ## ⚡ Why TorchServe
-* [Model Management API](https://github.com/pytorch/serve/blob/master/docs/management_api.md#management-api): multi model management with optimized worker to model allocation
-* [Inference API](https://github.com/pytorch/serve/blob/master/docs/inference_api.md#inference-api): REST and gRPC support for batched inference
+* [Model Management API](management_api.md): multi model management with optimized worker to model allocation
+* [Inference API](inference_api.md): REST and gRPC support for batched inference
 * [TorchServe Workflows](https://github.com/pytorch/serve/blob/master/examples/Workflows/README.md#workflow-examples): deploy complex DAGs with multiple interdependent models
 * Default way to serve PyTorch models in
     * [Kubeflow](https://v0-5.kubeflow.org/docs/components/pytorchserving/)
     * [MLflow](https://github.com/mlflow/mlflow-torchserve)
     * [Sagemaker](https://aws.amazon.com/blogs/machine-learning/serving-pytorch-models-in-production-with-the-amazon-sagemaker-native-torchserve-integration/)
     * [Kserve](https://kserve.github.io/website/0.8/modelserving/v1beta1/torchserve/): Supports both v1 and v2 API
     * [Vertex AI](https://cloud.google.com/blog/topics/developers-practitioners/pytorch-google-cloud-how-deploy-pytorch-models-vertex-ai)
-* Export your model for optimized inference. Torchscript out of the box, [ORT and ONNX](https://github.com/pytorch/serve/blob/master/docs/performance_guide.md#performance-guide), [IPEX](https://github.com/pytorch/serve/tree/master/examples/intel_extension_for_pytorch), [TensorRT](https://github.com/pytorch/serve/blob/master/docs/performance_guide.md#performance-guide), [FasterTransformer](https://github.com/pytorch/serve/tree/master/examples/FasterTransformer_HuggingFace_Bert)
-* [Performance Guide](https://github.com/pytorch/serve/blob/master/docs/performance_guide.md#performance-guide): builtin support to optimize, benchmark and profile PyTorch and TorchServe performance
+* Export your model for optimized inference. Torchscript out of the box, [ORT and ONNX](https://github.com/pytorch/serve/blob/master/docs/performance_guide.md#performance-guide), [IPEX](https://github.com/pytorch/serve/tree/master/examples/intel_extension_for_pytorch), [TensorRT](performance_guide.md), [FasterTransformer](https://github.com/pytorch/serve/tree/master/examples/FasterTransformer_HuggingFace_Bert)
+* [Performance Guide](performance_guide.md): builtin support to optimize, benchmark and profile PyTorch and TorchServe performance
 * [Expressive handlers](https://github.com/pytorch/serve/blob/master/CONTRIBUTING.md#contributing-to-torchServe): An expressive handler architecture that makes it trivial to support inferencing for your usecase with [many supported out of the box](https://github.com/pytorch/serve/tree/master/ts/torch_handler)
-* [Metrics API](https://github.com/pytorch/serve/blob/master/docs/metrics.md#torchserve-metrics): out of box support for system level metrics with [Prometheus exports](https://github.com/pytorch/serve/tree/master/examples/custom_metrics), custom metrics and PyTorch profiler support
-
+* [Metrics API](metrics.md): out of box support for system level metrics with [Prometheus exports](https://github.com/pytorch/serve/tree/master/examples/custom_metrics), custom metrics and PyTorch profiler support
 ## 🤔 How does TorchServe work

 * [Serving Quick Start](https://github.com/pytorch/serve/blob/master/README.md#serve-a-model) - Basic server usage tutorial
```
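As a concrete flavor of the Inference API bullet in this hunk, the canonical REST prediction call is sketched below; the model name and input file are placeholders for a registered model and a local image:

```bash
# Sketch: REST inference against a loaded model on the default port;
# densenet161 and kitten.jpg are placeholders.
curl http://127.0.0.1:8080/predictions/densenet161 -T kitten.jpg
```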

ts_scripts/spellcheck_conf/wordlist.txt

Lines changed: 1 addition & 1 deletion
```diff
@@ -1051,4 +1051,4 @@ largemodels
 torchpippy
 InferenceSession
 maxRetryTimeoutInSec
-neuronx
+neuronx
```
