Use the PyTorch BatchNorm folding instead of a handwritten converter when possible #31
Labels: component: lowering, feature request, good first issue, help wanted, priority: high
We currently use a handwritten converter for batch norm that fuses the operation into the preceding convolution. Where possible, it would be simpler to run the `_jit_pass_fold_convbn` pass during lowering instead.
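For context, the fusion in question is purely arithmetic: the batch norm's affine transform can be folded into the convolution's weights and bias ahead of time. A minimal sketch of that per-channel math in plain Python (the function name and flat list layout are illustrative, not the converter's actual API):

```python
import math

def fold_bn_into_conv(weight, bias, gamma, beta, mean, var, eps=1e-5):
    # For each output channel c of the conv, batch norm computes
    #   y[c] = gamma[c] * (conv[c] - mean[c]) / sqrt(var[c] + eps) + beta[c]
    # which is equivalent to a conv with rescaled weights and shifted bias:
    #   w'[c] = w[c] * scale[c]
    #   b'[c] = (b[c] - mean[c]) * scale[c] + beta[c]
    # where scale[c] = gamma[c] / sqrt(var[c] + eps).
    folded_w, folded_b = [], []
    for c in range(len(weight)):
        scale = gamma[c] / math.sqrt(var[c] + eps)
        folded_w.append([w * scale for w in weight[c]])
        folded_b.append((bias[c] - mean[c]) * scale + beta[c])
    return folded_w, folded_b
```

The `_jit_pass_fold_convbn` pass performs this same rewrite directly on the TorchScript graph, so running it during lowering removes the batch norm nodes before our converters ever see them.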