Version Bumps #42


Closed
narendasan opened this issue Apr 23, 2020 · 0 comments · Fixed by #48
Assignees: narendasan
Labels: feature request (New feature or request)
Milestone: v0.1.0

Comments

@narendasan (Collaborator) commented:

With Libtorch 1.5.0 out we are going to be updating soon:

  • Libtorch 1.5.0
  • CUDA 10.2
  • cuDNN 7.6
  • TensorRT 7.0
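The dependency targets listed above could be checked against installed versions with a small helper. This is a hypothetical sketch; `parse_version` and `needs_bump` are illustrative names, not part of the project, and the parsing is deliberately naive (plain dotted integers, not PEP 440).

```python
def parse_version(v: str) -> tuple:
    # "1.5.0" -> (1, 5, 0); illustrative only, assumes plain dotted integers
    return tuple(int(part) for part in v.split("."))

def needs_bump(installed: str, target: str) -> bool:
    # True when the installed dependency is older than the bump target
    return parse_version(installed) < parse_version(target)
```

For example, `needs_bump("1.4.0", "1.5.0")` returns `True`, while `needs_bump("1.5.0", "1.5.0")` returns `False`.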
@narendasan narendasan added the feature request New feature or request label Apr 23, 2020
@narendasan narendasan added this to the v0.1.0 milestone Apr 23, 2020
@narendasan narendasan self-assigned this Apr 23, 2020
narendasan added a commit that referenced this issue Apr 29, 2020
7.0.0)

- Closes #42
- Issue #1 is back; the root cause is unknown, and we will follow up with the PyTorch team
- Closes #14: to support hermetic builds, the default build now requires users to grab the tarballs from the NVIDIA website. We may look at ways to smooth this out later; the old method is still available
- New operators need to be implemented to support MobileNet in 1.5.0 (blocks merge into master)

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
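The hermetic-build point in the commit message above (pinned tarballs fetched manually from the NVIDIA website) implies verifying those archives against known checksums before use. A minimal sketch of that verification step, assuming a hypothetical `verify_tarball` helper that is not part of the project:

```python
import hashlib

def verify_tarball(data: bytes, expected_sha256: str) -> bool:
    # Reject any downloaded archive whose digest differs from the pinned value,
    # so the build only ever consumes the exact bytes it was pinned against.
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

In a hermetic setup, the expected digest would live in the build configuration alongside the archive's URL or local path, so a mismatched or tampered download fails loudly instead of silently changing the toolchain.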
frank-wei pushed a commit that referenced this issue Jun 4, 2022
…xcept for NNPI-related items (#42)

Summary: Pull Request resolved: pytorch/fx2trt#42

Reviewed By: scottxu0730, houseroad, wushirong

Differential Revision: D35266329

fbshipit-source-id: a1fcb04dfacce03d9b7d3fff7758f1956631166f