
Perception: turn back to TensorRT for pfe inference #12974

Merged: 1 commit, Nov 13, 2020

Conversation

jeroldchen (Contributor)

  1. Switch PFE inference back to TensorRT from libtorch, because PFE inference with libtorch has stability problems.
  2. Optimize the PFE ONNX-TensorRT inference time, which ends up slightly shorter than with libtorch.
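A claim like point 2 (TensorRT inference being slightly faster than libtorch) is usually backed by a warm-up-then-measure latency benchmark. The sketch below is a minimal, backend-agnostic harness of that pattern; `benchmark`, `infer_fn`, and the commented backend calls are hypothetical names, not part of this PR's code.

```python
import statistics
import time

def benchmark(infer_fn, n_warmup=10, n_runs=100):
    """Time an inference callable: warm up first so lazy initialization
    and caching do not skew the numbers, then record per-call latency."""
    for _ in range(n_warmup):
        infer_fn()
    latencies_ms = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer_fn()
        latencies_ms.append((time.perf_counter() - t0) * 1000.0)
    return statistics.mean(latencies_ms), statistics.median(latencies_ms)

# Hypothetical usage: run both backends on the same input and compare.
# mean_trt, _   = benchmark(lambda: trt_engine.infer(points))
# mean_torch, _ = benchmark(lambda: torch_model.forward(points))
```

Reporting the median alongside the mean helps separate a genuinely faster backend from one with occasional latency spikes, which matters when the motivation (point 1) is stability.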

@jeroldchen jeroldchen merged commit 295e13e into ApolloAuto:master Nov 13, 2020
@jeroldchen jeroldchen deleted the fix_crash branch November 13, 2020 03:16