TypeError: TorchserveModel.predict() takes 2 positional arguments but 3 were given #2156
Description
🐛 Describe the bug
TorchserveModel.py in kserve_wrapper does not support the latest predict() signature introduced in recent versions of kserve, so v2 inference requests fail with the TypeError above.
The new predict() definition takes three parameters - https://github.com/kserve/kserve/blob/15821f336336f73415a5da6968133c8b2b75c45d/python/kserve/kserve/model.py#L210
TorchserveModel.py has not yet been updated to match this signature -
https://github.com/pytorch/serve/blob/master/kubernetes/kserve/kserve_wrapper/TorchserveModel.py#L56
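For context, a simplified sketch of the two signatures (abbreviated from the linked files; type annotations and method bodies are omitted, so this is not the exact code from either repo):

```python
class Model:  # kserve >= 0.10 (python/kserve/kserve/model.py), simplified
    async def predict(self, payload, headers=None):
        # Model.__call__ now invokes self.predict(payload, headers),
        # i.e. two arguments in addition to self.
        ...


class TorchserveModel(Model):  # kubernetes/kserve/kserve_wrapper/TorchserveModel.py, simplified
    async def predict(self, request):
        # Only one argument besides self, so the extra `headers` argument raises:
        # TypeError: TorchserveModel.predict() takes 2 positional arguments but 3 were given
        ...
```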
Error logs
2023-02-23 19:29:04.215 354571 root INFO [timing():49] kserve.io.kserve.protocol.rest.v2_endpoints.infer 0.0013878345489501953, ['http_status:500', 'http_method:POST', 'time:wall']
2023-02-23 19:29:04.215 354571 root INFO [timing():49] kserve.io.kserve.protocol.rest.v2_endpoints.infer 0.0013489999999999995, ['http_status:500', 'http_method:POST', 'time:cpu']
2023-02-23 19:29:04.216 354571 uvicorn.error ERROR [run_asgi():376] Exception in ASGI application
Traceback (most recent call last):
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in call
return await self.app(scope, receive, send)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/fastapi/applications.py", line 270, in call
await super().call(scope, receive, send)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/starlette/applications.py", line 124, in call
await self.middleware_stack(scope, receive, send)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in call
raise exc
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in call
await self.app(scope, receive, _send)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/timing_asgi/middleware.py", line 68, in call
await self.app(scope, receive, send_wrapper)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in call
raise exc
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in call
await self.app(scope, receive, sender)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in call
raise e
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in call
await self.app(scope, receive, send)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/starlette/routing.py", line 706, in call
await route.handle(scope, receive, send)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/fastapi/routing.py", line 235, in app
raw_response = await run_endpoint_function(
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
return await dependant.call(**values)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/kserve/protocol/rest/v2_endpoints.py", line 130, in infer
response, response_headers = await self.dataplane.infer(
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/kserve/protocol/dataplane.py", line 276, in infer
response = await model(body, headers=headers)
File "/home/gavrishtest4/.local/lib/python3.10/site-packages/kserve/model.py", line 116, in call
response = (await self.predict(payload, headers)) if inspect.iscoroutinefunction(self.predict)
TypeError: TorchserveModel.predict() takes 2 positional arguments but 3 were given
Installation instructions
Followed instructions provided here - https://github.com/pytorch/serve/blob/master/kubernetes/kserve/kserve_wrapper/README.md
Model Packaging
Created a resnet50.mar using default parameters and handler
config.properties
inference_address=http://0.0.0.0:8085
management_address=http://0.0.0.0:8085
metrics_address=http://0.0.0.0:8082
grpc_inference_port=7075
grpc_management_port=7076
enable_envvars_config=true
install_py_dep_per_model=true
enable_metrics_api=true
metrics_format=prometheus
NUM_WORKERS=1
number_of_netty_threads=4
job_queue_size=10
model_store=/mnt/models/model_store
model_snapshot={"name":"startup.cfg","modelCount":1,"models":{"resnet50": {"1.0": {"defaultVersion": true,"marName": "resnet50.mar","minWorkers": 6,"maxWorkers": 6,"batchSize": 16,"maxBatchDelay": 200,"responseTimeout": 2000}}}}
Versions
Name: kserve
Version: 0.10.0
Name: torch
Version: 1.13.1+cu117
Name: torchserve
Version: 0.7.1
Repro instructions
Followed instructions provided here - https://github.com/pytorch/serve/blob/master/kubernetes/kserve/kserve_wrapper/README.md
Run the kserve_wrapper main.py and send an inference request using the v2 protocol, as in the sketch below.
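A hypothetical reproduction of the failing request (port 8080 is the default kserve model server REST port and "resnet50" comes from the model_snapshot above; the request body here is dummy data, not the actual image payload used for this report):

```python
# Send a v2-protocol inference request to the locally running kserve_wrapper.
import requests

v2_request = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1],
            "datatype": "BYTES",
            "data": ["<base64-encoded image>"],  # placeholder, not real data
        }
    ]
}

response = requests.post(
    "http://localhost:8080/v2/models/resnet50/infer",
    json=v2_request,
)
print(response.status_code, response.text)  # returns HTTP 500 with the TypeError above
```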
Possible Solution
TorchserveModel.predict() needs to be updated to match the latest kserve predict() method signature; a possible sketch is shown below.
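A minimal sketch of what the updated signature could look like, assuming the kserve 0.10 base-class shape (payload plus optional headers); the existing logic that forwards the payload to the TorchServe inference endpoint is elided and the actual fix may differ:

```python
from typing import Dict, Optional

from kserve import Model


class TorchserveModel(Model):
    async def predict(
        self, payload: Dict, headers: Optional[Dict[str, str]] = None
    ) -> Dict:
        # Accepting `headers` keeps the signature compatible with
        # kserve.Model.__call__, which now invokes predict(payload, headers).
        # Existing forwarding to the TorchServe inference address would go here.
        ...
```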