Upgrading to LibTorch 1.5.0 (CUDA 10.2, cuDNN 7.6.5, TensorRT 7.0.0) #48
Conversation
… folding before using the converter Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
7.0.0)
- Closes #42
- Issue #1 is back, unknown root cause, will follow up with the PyTorch Team
- Closes #14: The default build now requires users to grab the tarballs from the NVIDIA website to support hermetic builds; may look at some methods to smooth this out later. The old method is still available.
- New operators need to be implemented to support MobileNet in 1.5.0 (blocks merge into master)
Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
Closes: #31 Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
Nice upgrade. When will your team merge the 1.5 branch into master? I think the PyTorch 1.5 JIT coding style is very good.
It will probably be a couple of days; we need to address a bunch of changes PyTorch has made to how they generate the IR.
FB released version 1.5 a couple of days ago and I have only skimmed through the JIT code, not looked into it deeply. I think the JIT mechanism hasn't changed much and the JIT code has been restructured.
elimination pass Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
preferred path now Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
aten::addmm Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
…c input size Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
support Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
conversion time through ctx Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
before throwing conversion warning Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
evaluators Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
Summary: Pull Request resolved: https://github.com/pytorch/fx2trt/pull/48
Currently, to_dtype can only support:
1) to(dtype)
This diff makes this op capable of handling more cases:
2) to(torch.device)        # gpu
3) to(torch.device, dtype) # gpu
(Note: this ignores all push blocking failures!)
Reviewed By: 842974287
Differential Revision: D35331003
fbshipit-source-id: 4dee2b3c7899805fa4f3c91d0a16207241396647
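For reference, here is a minimal PyTorch sketch of the three `to` call patterns listed above (the tensor and device used here are illustrative, not taken from the diff):

```python
import torch

x = torch.randn(2, 3)  # FP32 tensor on the CPU

# 1) to(dtype): change only the element type
x_half = x.to(torch.float16)

if torch.cuda.is_available():
    # 2) to(torch.device): move to the GPU, keep the dtype
    x_gpu = x.to(torch.device("cuda:0"))

    # 3) to(torch.device, dtype): move to the GPU and change the dtype in one call
    x_gpu_half = x.to(torch.device("cuda:0"), torch.float16)
```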
PR also contains a good amount of the work required to support scripting (#16).
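For context on what scripting support involves, here is a minimal plain-PyTorch sketch (the `Gate` module is a hypothetical example, not from this PR) contrasting tracing with scripting; scripting keeps data-dependent control flow in the TorchScript graph that a converter has to handle:

```python
import torch
import torch.nn as nn

class Gate(nn.Module):  # hypothetical module, for illustration only
    def forward(self, x):
        # data-dependent branch: baked into one path by tracing, preserved by scripting
        if x.sum() > 0:
            return torch.relu(x)
        return x

mod = Gate().eval()

# Tracing records the ops executed for one example input (warns about the branch)
traced = torch.jit.trace(mod, torch.randn(1, 3))

# Scripting compiles the Python source, keeping the if/else in the graph
scripted = torch.jit.script(mod)

print(scripted.graph)  # TorchScript IR, including the prim::If node
```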
Discovered that using adaptive pooling with dynamic shape does not work. This will have to be a limitation of the system until TensorRT has the ability to configure pooling window sizes at runtime.
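To illustrate why adaptive pooling and dynamic shape clash, a small plain-PyTorch sketch (shapes chosen only for illustration): the effective pooling window is derived from the input size at runtime, which is exactly what the note above about configuring pooling window sizes at runtime refers to.

```python
import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d(output_size=(7, 7))

# The output size is fixed, so the effective kernel/stride depend on the input:
# different input resolutions imply different pooling windows.
for h in (14, 28, 56):
    x = torch.randn(1, 64, h, h)
    y = pool(x)
    print(f"input {h}x{h} -> output {tuple(y.shape[-2:])}, window ~{h // 7}x{h // 7}")
```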
Issue #1 is back because of a bug in PyTorch (the fix is already in PyTorch master), but all tests have been verified.