
Debugging standalone handlers #1538

Open
@msaroufim

Description


📚 Documentation

Related to #1504

Often when new users are debugging their model handler files, they make a change, repackage the model, and then inspect the logs in model_log.log. While this loop is fast enough for advanced users, it's very confusing for new users because errors never show up in the console, and repackaging the model adds a lag, especially if the model is large.

The insight here is that model handlers are just Python files. The way TorchServe works is that the Java frontend spawns a new process which runs the handler, but we can do the same thing ourselves with a main() block.

So assume your handler.py is a typical handler from the examples, but now also ends with the following:

if __name__ == "__main__":
    handler = CustomHandler()
    handler.initialize()  # depending on your handler this may need a (mock) context object
    x = torch.randn(1, 3, 224, 224)  # example input shape, adjust for your model
    x = handler.preprocess(x)
    x = handler.inference(x)
    x = handler.postprocess(x)
    print(x)

The nice thing about this approach is that you can now add print() statements, attach a debugger, and run a profiler directly on the Python file, e.g. scalene handler.py or py-spy record -- python handler.py
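For example, here is a rough sketch (reusing the same hypothetical CustomHandler and example input shape from above) of pausing in pdb right before inference with the built-in breakpoint():

if __name__ == "__main__":
    handler = CustomHandler()
    handler.initialize()  # may need a (mock) context object depending on your handler
    x = torch.randn(1, 3, 224, 224)  # example input shape, adjust for your model
    x = handler.preprocess(x)
    breakpoint()  # drops into pdb here so you can inspect x before inference
    x = handler.inference(x)
    x = handler.postprocess(x)
    print(x)

From the pdb prompt you can step through inference() line by line or print intermediate tensors, all without packaging a model archive.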

The limitation of this approach is that it doesn't help with debugging issues that cross the boundary between the handler and the Java frontend.
