Commit 453cd2d

badenh authored and serge-sans-paille committed
Update ShapeInference.md
A variety of editorial, typographic, and formatting tweaks.
1 parent d192a4a commit 453cd2d

File tree

1 file changed: +20 −20 lines changed


mlir/docs/ShapeInference.md

Lines changed: 20 additions & 20 deletions
@@ -10,7 +10,7 @@ constraints/bounds in the system for that operation (e.g., the output of a
 valuable constraints that could be captured even without full knowledge of the
 shape.

-Type inference is currently modelled executionally for op creation using the
+Type inference is currently modelled executionally for operation creation using the
 [`InferTypeOpInterface`][InferTypeOpInterface], while
 `InferShapedTypeOpInterface` is used to implement the shape and element type
 inference. The return type can often be deduced from the deduced return shape
@@ -27,7 +27,7 @@ Initially the shape inference will be declaratively specified using:
 *   Constraints on the operands of an operation directly. For example
     constraining the input type to be tensor/vector elements or that the
     elemental type be of a specific type (e.g., output of computing the size
-    of a value is of elemental type `i1`) or class (e.g., float like).
+    of a value is of elemental type `i1`) or class (e.g., float-like).
 *   Constraints across operands and results of an operation.

     -   For example, specifying equality constraints on type/constituents of a
@@ -41,7 +41,7 @@ exceptional case.
 ## Testing

 Shape inference is currently tested alongside type inference by
-`TestReturnTypeDriver` in the test dialect. The driver performs two checks:
+`TestReturnTypeDriver` in the test dialect. This driver performs two checks:

 1.  Verification that the return types specified matches the infered types. This
     explicit check will be removed and made part of Op verification instead.
@@ -63,7 +63,7 @@ This will focus on the shape functions (e.g., determine the rank and dimensions
 of the output shape). As shown in the shaped container type, shape will be one
 of 3 components, the others being elemental type and attribute (which is
 currently left open with the intention of supporting extensions such as layouts
-or bounded shapes). This allows for decoupling of these:
+or bounded shapes at a later point). This allows for decoupling of these:

 *   Not all the information is needed for all analysis;
 *   Not all shape functions need to provide all the information (e.g., one could
@@ -73,16 +73,16 @@ or bounded shapes). This allows for decoupling of these:
     representation of an operation;

 An argument could be made that these are metadata function instead of shape
-functions, with some considering shape and elemental type different and some as
+functions, with some considering shape and elemental types different and some considering them both as
 part of shape. But `shape function` is IMHO descriptive and metadata can span
 too large a range of potential uses/values.

 ### Requirements

-The requirements for the shape inference functions are shaped by the
+The requirements for the shape inference functions are determined by the
 requirements of shape inference, but we believe the requirements below still
-allow freedom to consider different shape inference approaches and so we don't
-constrain to a particular shape inference approach here.
+allow freedom to consider different shape inference approaches and so we do not
+impose a particular shape inference approach here.

 #### Shape inference functions
@@ -104,8 +104,8 @@ constrain to a particular shape inference approach here.
     guaranteed to pass.
     *   Ideally all would eventually (see section
         [Inlining shape checking](#inline)) be elided.
-    *   Only report error guaranteed to occur at runtime, if an error is only
-        possible rather use runtime assertion to fail and produce an error
+    *   Only reporting errors which are guaranteed to occur at runtime. If an error is only
+        possible (rather than guaranteed) then we use a runtime assertion to fail and produce an error
         message with the invariant violated.

 *   Shape functions usable by compiler and runtime.
@@ -130,7 +130,7 @@ constrain to a particular shape inference approach here.

 *   Shape inference functions are expressible at runtime

-    *   User can define a shape function for a new op dynamically at runtime,
+    *   User can define a shape function for a new operation dynamically at runtime,
         this allows for vendors to describe an operation and shape function
         dynamically.

@@ -140,10 +140,10 @@ constrain to a particular shape inference approach here.
     information)

 *   Shape functions should be cheap to invoke on each kernel launch.
-    *   Shape function dictated by arguments (operands, attributes and regions)
+    *   Shape function can be dictated by arguments (operands, attributes and regions)
         only (e.g., same operands as the corresponding operation could be
        constructed & invoked with).
-    *   Shape information that need higher-level/graph information should use
+    *   Shape information that needs higher-level/graph information should use
        richer types (e.g., `TensorList<F32>`);
 *   The function should be invocable before/while constructing an op (e.g.,
     can't rely on the op being constructed).
@@ -157,19 +157,19 @@ constrain to a particular shape inference approach here.
     determining the shape & then post to be able to actually consume the
     output of the file).

-*   The shape function op dialect should interop with non shape dialect ops.
+*   The shape function operation dialect should be interoperable with non-shape function dialect operations.

-    *   There may be a common set of ops that satisfy most uses (e.g., merge,
+    *   There may be a common set of operations that satisfy most uses (e.g., merge,
         equal_type, arithmetic expressions, slice, concat, pattern matching on
         attributes such as padding etc.) that will be discovered and could cover
-        a large percentage of the use cases. And among these there will be some
+        a large percentage of the use cases. Among these there will be some
         which carry extra semantic info that could be used for symbolic
         constraints (e.g., checking equality of two dimensions resulting in
         setting an equality constraint) and higher-order interpretation for
         constraint solving.

-        It is therefore beneficial to reuse operations but not required.
-        Especially as for statically known shapes, arbitrary arithmetic
+        It is therefore beneficial (but not required) to reuse operations,
+        especially as for statically known shapes, arbitrary arithmetic
         computations could still be performed. This means that the computations
         performed statically may or may not be supported by an arbitrary solver,
         but would still be allowed.
@@ -239,7 +239,7 @@ operations).

 ### Possibly Asked Questions

-#### What about ODS specifications of ops?
+#### What about ODS specifications of operations?

 In ODS we have been recording the constraints for the operands & attributes of
 an operation. Where these are sufficient to constrain the output shape (e.g.,
@@ -251,7 +251,7 @@ serialization approach).

 #### Why not extract the shape function from reference implementation?

 This could be done in future! The extracted shape function would use the shape
-inference dialect, so we are starting there. Especially for ops described in a
+inference dialect, so we are starting there. Especially for operations described in a
 structured way, one could autogenerate the shape function.

 #### How/in what language will the shape functions be authored?
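The requirement in the diff above that a shape function be "dictated by arguments (operands, attributes and regions) only" can be illustrated with a small standalone sketch. This is not MLIR's actual API — just a hypothetical Python toy of a NumPy-style broadcast shape function that consumes only its operand shapes and raises only on errors guaranteed to occur, mirroring the error-reporting requirement:

```python
def broadcast_shape(lhs, rhs):
    """Toy shape function: broadcast two static shapes (NumPy rules).

    Depends only on its arguments (the operand shapes), so it can be
    invoked before the operation itself is constructed. Raises only
    when the mismatch is guaranteed, never speculatively.
    """
    result = []
    # Walk from the trailing dimension, padding the shorter shape
    # with 1s (the broadcasting identity).
    for i in range(1, max(len(lhs), len(rhs)) + 1):
        l = lhs[-i] if i <= len(lhs) else 1
        r = rhs[-i] if i <= len(rhs) else 1
        if l != r and l != 1 and r != 1:
            raise ValueError(f"incompatible dimensions {l} and {r}")
        result.append(max(l, r))
    return tuple(reversed(result))
```

For example, `broadcast_shape((2, 1, 4), (3, 4))` yields `(2, 3, 4)`, while `(2, 3)` against `(4, 3)` raises, since that mismatch is guaranteed at runtime.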
