
❓ [Question] How to convert at::tensor into nvinfer1::ITensor? #146


Closed
zhanjw opened this issue Jul 17, 2020 · 3 comments
Labels
question Further information is requested

Comments

@zhanjw

zhanjw commented Jul 17, 2020

❓ Question

How do I convert an at::Tensor into an nvinfer1::ITensor?

What you have already tried

I tried to run resnet101 with TRTorch, but there was an error when compiling the graph. My analysis points to this converter:

TRTorch/core/conversion/converters/impl/element_wise.cpp

"aten::div.Tensor(Tensor self, Tensor other) -> Tensor",
[](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
	// Should implement self / other
	auto self = args[0].ITensor();
	auto other = args[1].ITensor();
	auto div = add_elementwise(ctx, nvinfer1::ElementWiseOperation::kDIV, self, other, util::node_info(n));

	TRTORCH_CHECK(div, "Unable to create div layer from node: " << *n);

	div->setName(util::node_info(n).c_str());
	auto out = ctx->AssociateValueAndTensor(n->outputs()[0], div->getOutput(0));

	LOG_DEBUG("Output tensor shape: " << out->getDimensions());
	return true;
 }

Here, self is an ITensor, but other arrives as an IValue. The program therefore exits with a type error at this line:

    auto other = args[1].ITensor();

I know the IValue can be unwrapped into an at::Tensor, but add_elementwise requires an nvinfer1::ITensor.
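To make the gap concrete, here is a rough sketch (not working code) of the branch I think the converter needs, using the Var helpers isIValue() and unwrapToTensor(); the missing piece is exactly the at::Tensor -> nvinfer1::ITensor step:

    auto self = args[0].ITensor();
    if (args[1].isIValue()) {
        // other arrived as a frozen at::Tensor wrapped in an IValue
        auto other_tensor = args[1].unwrapToTensor();
        // ??? add_elementwise needs an nvinfer1::ITensor here
    } else {
        // other is already a TensorRT tensor in the network
        auto other = args[1].ITensor();
    }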

Environment

Build information about the TRTorch compiler can be found by turning on debug messages

  • CPU Architecture: x86_64
  • OS (e.g., Linux): Ubuntu
  • CUDA version: 10.2 with cudnn 8.0
  • GCC/G++: 7.5.0
zhanjw added the question label Jul 17, 2020
@narendasan
Collaborator

narendasan commented Jul 17, 2020

You can add an IConstantLayer to freeze the at::Tensor.

    auto t = args[0].unwrapToTensor();              // extract the frozen at::Tensor from the IValue
    auto t_weights = Weights(ctx, t);               // convert it to TensorRT Weights
    auto const_layer = ctx->net->addConstant(t_weights.shape, t_weights.data); // IConstantLayer holding those weights
    auto const_tensor = const_layer->getOutput(0);  // an nvinfer1::ITensor carrying the frozen value

const_tensor is then an ITensor that holds the value of the static at::Tensor and can be passed to add_elementwise.
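
For context, a minimal sketch of how that output could then be plugged into the div converter shown above (assuming it runs inside the same lambda, so ctx, n, self, and add_elementwise are in scope):

    // const_tensor is an nvinfer1::ITensor*, so it can stand in for the
    // failing args[1].ITensor() call in the div converter
    auto div = add_elementwise(ctx, nvinfer1::ElementWiseOperation::kDIV, self, const_tensor, util::node_info(n));
    TRTORCH_CHECK(div, "Unable to create div layer from node: " << *n);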

@narendasan
Collaborator

However, this seems like a case where there is some static value that should be frozen beforehand. Seems similar to #145

@zhanjw
Author

zhanjw commented Jul 20, 2020

I added code to the "aten::sub" and "aten::div" converters in trtorch/core/conversion/converters/impl/element_wise.cpp:

            "aten::div.Tensor(Tensor self, Tensor other) -> Tensor",
            [](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
                // Should implement self / other
                auto self = args[0].ITensor();
                decltype(self) other;
                if (args[1].isIValue()) {
                    auto other_tensor = args[1].unwrapToTensor().to("cpu"); //Note that this must be loaded to the CPU, otherwise when other_tensor is on the GPU, the subsequent code will report an error and exit.

                    nvinfer1::Dims selfDims = self->getDimensions();
                    auto t = at::ones({selfDims.d[0], selfDims.d[1], selfDims.d[2], selfDims.d[3]}, other_tensor.dtype()) * other_tensor[0]; // Broadcast. Here you need to broadcast explicitly, otherwise an error will be reported because the size is different.

                    auto t_weights = Weights(ctx, t);
                    auto const_layer = ctx->net->addConstant(t_weights.shape, t_weights.data);
                    other = const_layer->getOutput(0); // Thanks for your help.
                }
                else {
                    other = args[1].ITensor();
                }
                //auto other = args[1].ITensor();
                auto div = add_elementwise(ctx, nvinfer1::ElementWiseOperation::kDIV, self, other, util::node_info(n));
                TRTORCH_CHECK(div, "Unable to create div layer from node: " << *n);
                div->setName(util::node_info(n).c_str());
                auto out = ctx->AssociateValueAndTensor(n->outputs()[0], div->getOutput(0));
                LOG_DEBUG("Output tensor shape: " << out->getDimensions());
                return true;
             }

Problem solved. Compiling the .pt model now passes. Hope this helps, thanks.
