We should either look at containerizing the build, or find some way to pull TensorRT, CUDA, etc. with Bazel. We already do this with libtorch. This should make it less likely that builds fail because of people's environments.
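As a sketch of the Bazel route: the same `http_archive` mechanism already used for libtorch could fetch a TensorRT tarball into a hermetic repository. The URL, checksum, and `BUILD` file path below are placeholders, not the project's actual configuration:

```starlark
# WORKSPACE (hypothetical sketch)
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "tensorrt",
    # Placeholder URL -- TensorRT tarballs require an NVIDIA login,
    # so a real setup might point at an internal mirror instead.
    urls = ["https://example.com/TensorRT-7.0.0.tar.gz"],
    sha256 = "<tarball checksum>",
    strip_prefix = "TensorRT-7.0.0",
    # A BUILD file exposing the TensorRT headers and shared libraries.
    build_file = "//third_party/tensorrt:BUILD",
)
```

Where the tarball cannot be fetched automatically (e.g. because downloads are login-gated), `new_local_repository` pointing at a user-extracted directory is the usual fallback.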
7.0.0)
- Closes #42
- Issue #1 is back; root cause unknown, will follow up with the PyTorch team
- Closes #14: to support hermetic builds, the default build now requires users to grab the tarballs from the NVIDIA website; we may look at ways to smooth this out later. The old method is still available
- New operators need to be implemented to support MobileNet in 1.5.0
(blocks merge into master)
Signed-off-by: Naren Dasan <[email protected]>
Summary:
Pull Request resolved: https://github.com/pytorch/fx2trt/pull/14
Add support for torch.baddbmm. Add unit tests to cover bmm and addmm; both map to other converters instead of having a new acc_op converter.
Reviewed By: yinghai
Differential Revision: D34743279
fbshipit-source-id: efd417b2b494635c63f2ec58daa3fc568f72111a
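For reference, `baddbmm` is a batch matrix multiply plus a scaled add, which is why it can map onto the existing bmm/addmm converter paths rather than needing a dedicated acc_op converter. A minimal NumPy sketch of the semantics (`baddbmm_reference` is a hypothetical name, not part of the codebase):

```python
import numpy as np

def baddbmm_reference(inp, batch1, batch2, beta=1.0, alpha=1.0):
    """Reference semantics of torch.baddbmm:
    out = beta * inp + alpha * (batch1 @ batch2),
    where batch1 is (b, n, m) and batch2 is (b, m, p)."""
    return beta * inp + alpha * np.matmul(batch1, batch2)

# Example: batch of 2, (3x4) @ (4x5) -> (3x5)
rng = np.random.default_rng(0)
inp = rng.standard_normal((2, 3, 5))
b1 = rng.standard_normal((2, 3, 4))
b2 = rng.standard_normal((2, 4, 5))
out = baddbmm_reference(inp, b1, b2, beta=0.5, alpha=2.0)
assert out.shape == (2, 3, 5)
```

With `beta=1` and `alpha=1` this reduces to `inp + bmm(batch1, batch2)`, which is the decomposition the converter mapping relies on.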