
Adds basic scripting support #81


Merged: narendasan merged 18 commits into master from fuse_addmm_branches on Jun 5, 2020

Conversation

narendasan (Collaborator) commented Jun 3, 2020

Description

This PR adds support for scripted feed-forward models such as ResNet and MobileNet. It also changes how the evaluator system works: evaluators can now be given filters that limit their scope, which we use to separate loops we want to evaluate from loops we want to convert.

New support for:

  • prim::NumToTensor
  • aten::zeros
  • aten::mul.int
  • aten::sub.int
  • aten::__round_to_zero_floordiv
  • aten::slice.t
  • aten::len.t
  • prim::min.self_int
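
For context, scripting ordinary Python control flow and list handling is what produces these ops in the TorchScript IR. A minimal illustration (not taken from the PR; the exact lowering can vary by PyTorch version):

```python
import torch
from typing import List

@torch.jit.script
def halve_dims(shape: List[int]) -> List[int]:
    out: List[int] = []
    for i in range(len(shape)):    # len() lowers to aten::len.t; the loop becomes prim::Loop
        out.append(shape[i] // 2)  # // on ints may lower to aten::__round_to_zero_floordiv
    return out[1:]                 # list slicing lowers to aten::slice.t

print(halve_dims.graph)            # inspect the emitted ops
print(halve_dims([4, 8, 16]))      # [4, 8]
```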

New lowering pass for:

  • Fusing addmm and matmul / add branches
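
The fusion relies on the identity addmm(b, x, w) = x @ w + b, so an addmm branch and a matmul-then-add branch compute the same thing. A quick check (illustrative only, not code from the PR):

```python
import torch

x = torch.randn(2, 3)
w = torch.randn(3, 4)
b = torch.randn(4)

# addmm computes b + x @ w in a single op; the lowering pass fuses
# separate matmul / add branches into this fused form where possible.
assert torch.allclose(torch.addmm(b, x, w), torch.matmul(x, w) + b)
```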

Closes #16

Type of change


  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation and have regenerated the documentation (make html in docsrc)
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes

…aten::addmm op that can be expanded by a later pass

Signed-off-by: Naren Dasan <[email protected]>

…late pass to run)

Signed-off-by: Naren Dasan <[email protected]>
…evaluators.

This allows developers to blacklist or whitelist specific cases of a node kind so that an evaluator runs on a subset of cases instead of on every instance. This is important for prim::Loop, where we want to evaluate some loops and not others. It also lets us use function schemas to target nodes: for instance, there is now an aten::mul.Tensor converter and an aten::mul.int evaluator. In the Tensor case the converter is called; in the int case, the evaluator. We cannot switch to keying on function schema as we do for converters, because some node kinds don't have a schema, so we do schema whitelisting instead (see the sketch after this commit message).

This commit also adds the following evaluators:

- aten::mul.int
- aten::sub.int
- aten::__round_to_zero_floordiv
- aten::slice.t
- aten::len.t
- prim::min.self_int

Signed-off-by: Naren Dasan <[email protected]>
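
A minimal Python sketch of that whitelisting scheme (the real implementation lives in the C++ conversion core; all names here are hypothetical):

```python
from typing import Callable, Dict, List, Optional

class EvalRegistration:
    """Hypothetical record pairing a node kind with an evaluator and an
    optional schema whitelist limiting which overloads it applies to."""
    def __init__(self, kind: str, evaluator: Callable,
                 valid_schemas: Optional[List[str]] = None):
        self.kind = kind
        self.evaluator = evaluator
        self.valid_schemas = valid_schemas

REGISTRY: Dict[str, EvalRegistration] = {}

def register_evaluator(kind: str, evaluator: Callable,
                       valid_schemas: Optional[List[str]] = None) -> None:
    REGISTRY[kind] = EvalRegistration(kind, evaluator, valid_schemas)

def find_evaluator(kind: str, schema: str) -> Optional[Callable]:
    reg = REGISTRY.get(kind)
    if reg is None:
        return None
    # No whitelist means the evaluator handles every instance of the kind.
    if reg.valid_schemas is not None and schema not in reg.valid_schemas:
        return None  # fall through, e.g. to a converter
    return reg.evaluator

# aten::mul is only *evaluated* for the int overload; the Tensor
# overload falls through to the aten::mul.Tensor converter instead.
register_evaluator("aten::mul", lambda a, b: a * b,
                   valid_schemas=["aten::mul.int(int a, int b) -> (int)"])

assert find_evaluator("aten::mul", "aten::mul.int(int a, int b) -> (int)")
assert find_evaluator(
    "aten::mul",
    "aten::mul.Tensor(Tensor self, Tensor other) -> (Tensor)") is None
```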
narendasan marked this pull request as a draft on June 3, 2020
…favor of an evaluator. Also fixes a number of bugs in the evaluators

Signed-off-by: Naren Dasan <[email protected]>
…transpose instead of conv

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
…ops

Adds support for:
- prim::shape
- aten::neg
- aten::add
- aten::__getitem__
- aten::append

Fixes:
- prim::min

Removes:
- prim::Loop

Signed-off-by: Naren Dasan <[email protected]>
Note: this is currently unguarded; loops must either be evaluable at compile time or the module is not supported. Upcoming work on RNNs will add support for more types of loops.

Signed-off-by: Naren Dasan <[email protected]>
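
To illustrate the distinction (a hypothetical sketch; whether a given loop is evaluable depends on the compiler's analysis):

```python
import torch
from typing import List

@torch.jit.script
def numel_from_shape(shape: List[int]) -> int:
    # Trip count and body depend only on an int list (e.g. a tensor
    # shape, which is fixed once input sizes are set), so a loop like
    # this can be evaluated away at compile time.
    total = 1
    for s in shape:
        total = total * s
    return total

@torch.jit.script
def data_dependent(x: torch.Tensor) -> torch.Tensor:
    # Trip count depends on runtime tensor *values*; this loop cannot
    # be evaluated at compile time, so a module containing it would be
    # rejected under the scheme described above.
    n = int(x.sum().item())
    out = x
    for _ in range(max(n, 0)):
        out = out + 1
    return out
```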
…(methods prefixed with _)

Signed-off-by: Naren Dasan <[email protected]>
narendasan marked this pull request as ready for review on June 5, 2020
Signed-off-by: Naren Dasan <[email protected]>
narendasan merged commit 6d60246 into master on Jun 5, 2020
narendasan deleted the fuse_addmm_branches branch on June 5, 2020
frank-wei pushed a commit that referenced this pull request on Jun 4, 2022:
Summary:
Pull Request resolved: https://github.com/pytorch/fx2trt/pull/81

The observer.observe calling signature is observe(module, inputs), but partial(observe, inputs) binds inputs as the first positional argument, reversing the order to observe(inputs, module).

Fix it by using an explicit lambda.
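
A minimal reproduction of the pitfall (hypothetical names, not the fx2trt code):

```python
from functools import partial

def observe(module, inputs):
    return f"module={module!r}, inputs={inputs!r}"

inputs = [1, 2, 3]

# Bug: partial binds `inputs` to the FIRST parameter, so calling the
# result with a module actually runs observe(inputs, module).
buggy = partial(observe, inputs)
print(buggy("my_module"))   # module=[1, 2, 3], inputs='my_module' (swapped!)

# Fix: an explicit lambda keeps the intended argument order.
fixed = lambda module: observe(module, inputs)
print(fixed("my_module"))   # module='my_module', inputs=[1, 2, 3]
```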

Reviewed By: wushirong

Differential Revision: D36638291

fbshipit-source-id: 64abc0952802d5b438e1013c5ff91a57442900d9
Labels

  • component: conversion (Issues re: Conversion stage)
  • component: converters (Issues re: Specific op converters)
  • component: evaluators (Issues re: Specific op evaluators)
  • component: lowering (Issues re: The lowering / preprocessing passes)
Development

Successfully merging this pull request may close these issues:

  • Start to handle branching in simple cases (#16)