Add lowering of aten.Int.Tensor op. #387


Merged: 1 commit merged into llvm:main on Nov 1, 2021

Conversation

pashu123
Member

The lowering of the `aten.Int.Tensor` op has been added.
The changes have been made as part of the `convert-torch-to-linalg` pass.

@pashu123
Member Author

Hi all, I have tried to come up with the lowering based on pytorch/TensorRT#513. I was not able to find the Torch Python function that can emit the `aten.Int.Tensor` op.

@pashu123 force-pushed the prashant/int_tensor branch from fb046bd to 16f543e on October 28, 2021, 05:42
@ramiro050
Collaborator

ramiro050 commented Oct 28, 2021

> I was not able to find the respective torch python function that can emit aten.Int.Tensor op.

The Python function that generates the `aten.Int.Tensor` op is `int(..)`. If you do something like

```python
def forward(self, x):
    return int(x)
```

where x is a torch tensor, you get the following MLIR:

```mlir
func private @__torch__.MyModule.forward(%arg0: !torch.nn.Module<"__torch__.MyModule">, %arg1: !torch.tensor {torch.type_bound = !torch.vtensor<[],f32>}) -> !torch.int {
  %1 = torch.operator "aten.Int.Tensor"(%arg1) : (!torch.tensor) -> !torch.int
  return %1 : !torch.int
}
```

I don't know whether e2e_testing currently supports returning ints, though. You could re-wrap the value in a tensor again before returning, if this is an issue.
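For reference, this mapping can be checked from Python: scripting a module whose `forward` calls `int(x)` produces a TorchScript graph containing an `aten::Int` node, which the importer turns into `aten.Int.Tensor`. A minimal sketch (the module name here is made up):

```python
import torch

class IntFromTensor(torch.nn.Module):
    def forward(self, x):
        # int() on a single-element tensor is what TorchScript
        # compiles to aten::Int (imported as aten.Int.Tensor).
        return int(x)

scripted = torch.jit.script(IntFromTensor())
# The scripted graph contains an aten::Int node.
print(scripted.graph)
```

In eager mode the same module simply returns the Python int, e.g. `IntFromTensor()(torch.tensor(7.0))` gives `7`.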

@pashu123
Member Author

> The python function that generates the aten.Int.Tensor op is int(..). [...] You could re-wrap the value in a tensor again before returning, if this is an issue.

Hey, thanks for this. Sure, let me check.

@pashu123 force-pushed the prashant/int_tensor branch from 16f543e to e65aa17 on October 31, 2021, 18:29
@pashu123
Member Author

pashu123 commented Nov 1, 2021

> The python function that generates the aten.Int.Tensor op is int(..). [...] You could re-wrap the value in a tensor again before returning if this is an issue.

Yes, this seems to be an issue. The return value must be of memref type:

```
Lowering Linalg-on-Tensors IR to LLVM with RefBackend failed with the following diagnostics:
error: return value must be memref type
note: see current operation: "std.return"(%1) : (i64) -> ()
```

I can re-wrap the integer into a tensor and return that, but then another op pops up, `"torch.aten.tensor.int"(%8, %6, %2, %5) : (!torch.int, !torch.int, !torch.none, !torch.bool) -> !torch.vtensor<[1],si64>`, and its lowering is missing.
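Putting the two snippets together, the re-wrapped IR would look roughly like the sketch below. The op forms are taken from the snippets in this thread; the operand names `%dtype`, `%none`, and `%false` are placeholders I made up for illustration:

```mlir
// Extract the scalar from the input tensor.
%int = torch.operator "aten.Int.Tensor"(%arg1) : (!torch.tensor) -> !torch.int
// Re-wrap it in a rank-1 tensor; this op needs its own lowering.
%out = "torch.aten.tensor.int"(%int, %dtype, %none, %false)
    : (!torch.int, !torch.int, !torch.none, !torch.bool) -> !torch.vtensor<[1],si64>
```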

@pashu123 force-pushed the prashant/int_tensor branch 2 times, most recently from be47b7c to 1d6b97b on November 1, 2021, 16:17
The lowering of `aten.Int.Tensor` op has been added.
The changes have been made as part of the `convert-torch-to-linalg` pass.

Signed-off-by: Prashant Kumar <[email protected]>
@pashu123 force-pushed the prashant/int_tensor branch from 1d6b97b to 96abfca on November 1, 2021, 16:20
@pashu123 pashu123 merged commit 53b4275 into llvm:main Nov 1, 2021
```python
def forward(self, x, y):
    # This is a workaround for not returning a scalar value.
    a = int(x)
    return y.add(y, alpha=a)
```
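In eager mode the workaround behaves as expected: the extracted int is consumed as the `alpha` scalar of `Tensor.add` instead of being returned directly. A small sketch (the class name is made up):

```python
import torch

class AddAlphaFromTensor(torch.nn.Module):
    def forward(self, x, y):
        # Consume the extracted int as a scalar argument
        # rather than returning it, avoiding the scalar-return issue.
        a = int(x)
        return y.add(y, alpha=a)

m = AddAlphaFromTensor()
out = m(torch.tensor(3), torch.ones(2))
# y.add(y, alpha=3) computes y + 3*y, so each element is 4.
print(out)
```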
Collaborator

this is pretty clever :)

Member Author

@ramiro050 All thanks to @cathyzhyi.

Contributor

It was originally @silvasean 's idea!

qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request Oct 3, 2022
4 participants