
Upgrading to LibTorch 1.5.0 (CUDA 10.2, cuDNN 7.6.5, TensorRT 7.0.0) #48


Merged
14 commits merged into master from pytorch_1.5.0 on May 4, 2020

Conversation

@narendasan (Collaborator) commented Apr 29, 2020

This PR also contains a good amount of the work required to support scripting (#16).

Discovered that using adaptive pooling with dynamic shape does not work. This will have to be a limitation of the system until TensorRT has the ability to configure pooling window sizes at runtime.

Issue #1 is back because of a bug in PyTorch; the fix is already in PyTorch master, and all tests have been verified.
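To illustrate why this is a build-time limitation, here is a minimal sketch in Python (not TRTorch converter code; the helper name is hypothetical): emulating adaptive pooling with a fixed pooling layer needs a window and stride derived from the concrete input size, so with dynamic shapes there is no single value to bake into the TensorRT pooling layer.

```python
# Hypothetical illustration, not TRTorch code: the fixed-window approximation
# of adaptive pooling derives its parameters from the concrete input size.
def adaptive_pool_params(input_dim: int, output_dim: int):
    """Window and stride a fixed pooling layer would need along one spatial dim."""
    stride = input_dim // output_dim
    window = input_dim - (output_dim - 1) * stride
    return window, stride

print(adaptive_pool_params(224, 7))  # (32, 32)
print(adaptive_pool_params(300, 7))  # (48, 42): different parameters for a different input size
```

Since TensorRT configures pooling window sizes when the engine is built, a network whose input size is only known at runtime cannot use this path; hence the static-shape restriction described above.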

- Closes #42
- Issue #1 is back, unknown root cause; will follow up with the PyTorch team
- Closes #14: The default build now requires users to grab the tarballs from the NVIDIA website to support hermetic builds; may look at some methods to smooth this out later. The old method is still available
- New operators need to be implemented to support MobileNet in 1.5.0 (blocks merge into master)

@narendasan added the WIP (Work is in progress, pull request should not be merged yet) label Apr 29, 2020
@narendasan added this to the v0.1.0 milestone Apr 29, 2020
@narendasan marked this pull request as draft April 29, 2020 01:13
@alanzhai219

Nice upgrade. When will your team merge 1.5 into master? I think the PyTorch 1.5 JIT coding style is much better.

@narendasan (Collaborator, Author)

It will probably be a couple of days; we need to address a number of changes PyTorch has made to how the IR is generated.

@alanzhai219

FB released version 1.5 a couple of days ago and I have only skimmed the JIT code, not looked into it deeply. I think the JIT mechanism hasn't changed much, and the JIT code has been restructured for the better. I remember an NVIDIA engineer wrote a blog post explaining how the JIT works in PyTorch 1.3, which may be helpful.

@narendasan marked this pull request as ready for review May 4, 2020 04:00
@narendasan removed the WIP (Work is in progress, pull request should not be merged yet) label May 4, 2020
@narendasan merged commit e837b7f into master May 4, 2020
@narendasan deleted the pytorch_1.5.0 branch May 4, 2020 04:09
@narendasan changed the title from "[WIP] Upgrading to LibTorch 1.5.0 (CUDA 10.2, cuDNN 7.6.5, TensorRT 7.0.0)" to "Upgrading to LibTorch 1.5.0 (CUDA 10.2, cuDNN 7.6.5, TensorRT 7.0.0)" May 4, 2020
frank-wei pushed a commit that referenced this pull request Jun 4, 2022
Summary:
Pull Request resolved: https://github.com/pytorch/fx2trt/pull/48

Currently, to_dtype can only support:
1) to(dtype)

This diff makes the op capable of handling more cases:
2) to(torch.device)        # gpu
3) to(torch.device, dtype) # gpu
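For reference, a minimal sketch of the three Tensor.to() call patterns listed above, using the standard PyTorch API (illustration only, not the fx2trt conversion code; the CUDA calls assume a GPU is available):

```python
import torch

x = torch.randn(2, 3)

y1 = x.to(torch.float16)                     # 1) to(dtype)
y2 = x.to(torch.device("cuda"))              # 2) to(torch.device), assumes a CUDA device is present
y3 = x.to(torch.device("cuda"), torch.half)  # 3) to(torch.device, dtype)
```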

(Note: this ignores all push blocking failures!)

Reviewed By: 842974287

Differential Revision: D35331003

fbshipit-source-id: 4dee2b3c7899805fa4f3c91d0a16207241396647
Successfully merging this pull request may close these issues: Version Bumps; Run standard passes during lowering; Grab dependencies automatically.