
Commit d192a4a

badenh authored and serge-sans-paille committed
Update Quantization.md
Various typographic, grammatical and formatting edits and tidy ups.
1 parent 271f964 commit d192a4a

File tree

1 file changed: +53 −53 lines changed

mlir/docs/Quantization.md

@@ -18,7 +18,7 @@ taken on the topic, and is not a general reference.
 
 The primary quantization mechanism supported by MLIR is a scheme which can
 express fixed point and affine transformations via uniformly spaced point on the
-Real number line.
+[Real](https://en.wikipedia.org/wiki/Real_number) number line.
 
 Further, the scheme can be applied:
 
@@ -30,11 +30,11 @@ Further, the scheme can be applied:
 
 [Fixed point](https://en.wikipedia.org/wiki/Fixed-point_arithmetic) values are a
 [Real](https://en.wikipedia.org/wiki/Real_number) number divided by a *scale*.
-We will call the result of the divided Real the *scaled value*.
+We will call the result of the divided real the *scaled value*.
 
 $$ real\_value = scaled\_value * scale $$
 
-The scale can be interpreted as the distance, in Real units, between neighboring
+The scale can be interpreted as the distance, in real units, between neighboring
 scaled values. For example, if the scale is $$ \pi $$, then fixed point values
 with this scale can only represent multiples of $$ \pi $$, and nothing in
 between. The maximum rounding error to convert an arbitrary Real to a fixed
@@ -43,10 +43,10 @@ previous example, when $$ scale = \pi $$, the maximum rounding error will be $$
 \frac{\pi}{2} $$.
 
 Multiplication can be performed on scaled values with different scales, using
-the same algorithm as multiplication of Real values (note that product scaled
+the same algorithm as multiplication of real values (note that product scaled
 value has $$ scale_{product} = scale_{left \mbox{ } operand} * scale_{right
-\mbox{ } operand} $$). Addition can be performed on scaled values, as long as
-they have the same scale, using the same algorithm as addition of Real values.
+\mbox{ } operand} $$). Addition can be performed on scaled values, so long as
+they have the same scale, using the same algorithm for addition of real values.
 This makes it convenient to represent scaled values on a computer as signed
 integers, and perform arithmetic on those signed integers, because the results
 will be correct scaled values.
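The scaled-value arithmetic described above can be sketched as follows. This is an illustrative Python sketch, not code from the MLIR repository; the helper name `real_from_scaled` is hypothetical, and the scales are chosen to be exact binary fractions so the equalities hold exactly:

```python
def real_from_scaled(scaled: int, scale: float) -> float:
    # real_value = scaled_value * scale
    return scaled * scale

# Multiplication: operand scales may differ; the product's scale is the
# product of the operand scales.
a_scaled, a_scale = 3, 0.5    # represents 1.5
b_scaled, b_scale = 4, 0.25   # represents 1.0
prod_scaled = a_scaled * b_scaled   # plain integer multiply: 12
prod_scale = a_scale * b_scale      # 0.125
assert real_from_scaled(prod_scaled, prod_scale) == 1.5 * 1.0

# Addition: only valid when both operands share the same scale.
c_scaled, d_scaled, scale = 6, 2, 0.25   # represent 1.5 and 0.5
sum_scaled = c_scaled + d_scaled         # plain integer add: 8
assert real_from_scaled(sum_scaled, scale) == 2.0
```

This is what makes signed-integer storage convenient: the integer results of ordinary integer arithmetic are themselves correct scaled values.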
@@ -55,31 +55,31 @@ will be correct scaled values.
 
 Mathematically speaking, affine values are the result of
 [adding a Real-valued *zero point*, to a scaled value](https://en.wikipedia.org/wiki/Affine_transformation#Representation).
-Or equivalently, subtracting a zero point from an affine value results in a
+Alternatively (and equivalently), subtracting a zero point from an affine value results in a
 scaled value:
 
 $$ real\_value = scaled\_value * scale = (affine\_value - zero\_point) * scale $$
 
-Essentially, affine values are a shifting of the scaled values by some constant
+Essentially, affine values are a shift of the scaled values by some constant
 amount. Arithmetic (i.e., addition, subtraction, multiplication, division)
-cannot, in general, be directly performed on affine values; you must first
-[convert](#affine-to-fixed-point) them to the equivalent scaled values.
+cannot, in general, be directly performed on affine values; they must first be
+[converted](#affine-to-fixed-point) to the equivalent scaled values.
 
 As alluded to above, the motivation for using affine values is to more
-efficiently represent the Real values that will actually be encountered during
-computation. Frequently, the Real values that will be encountered are not
-symmetric around the Real zero. We also make the assumption that the Real zero
+efficiently represent real values that will actually be encountered during
+computation. Frequently, real values that will be encountered are not
+symmetric around the real zero. We also make the assumption that the real zero
 is encountered during computation, and should thus be represented.
 
-In this case, it's inefficient to store scaled values represented by signed
-integers, as some of the signed integers will never be used. The bit patterns
+In this case, it is inefficient to store scaled values represented by signed
+integers, as some of the signed integers will never be used. In effect, the bit patterns
 corresponding to those signed integers are going to waste.
 
-In order to exactly represent the Real zero with an integral-valued affine
+In order to exactly represent the real zero with an integral-valued affine
 value, the zero point must be an integer between the minimum and maximum affine
 value (inclusive). For example, given an affine value represented by an 8 bit
 unsigned integer, we have: $$ 0 \leq zero\_point \leq 255$$. This is important,
-because in deep neural networks' convolution-like operations, we frequently
+because in convolution-like operations of deep neural networks, we frequently
 need to zero-pad inputs and outputs, so zero must be exactly representable, or
 the result will be biased.
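The affine relationship above can be illustrated with a small sketch. The helper name and the parameters (`scale = 0.1`, `zero_point = 128` for a uint8 representation) are assumptions for illustration, not values from the MLIR sources:

```python
import math

# Assumed example parameters for a uint8 affine representation.
scale, zero_point = 0.1, 128

def affine_to_real(affine: int) -> float:
    # Subtracting the zero point yields a scaled value; multiplying by
    # the scale recovers the real value:
    # real_value = (affine_value - zero_point) * scale
    return (affine - zero_point) * scale

# The real zero is exactly representable: it maps to the zero point itself.
assert affine_to_real(zero_point) == 0.0

# Other affine values represent reals offset from zero in steps of `scale`.
assert math.isclose(affine_to_real(200), 7.2)
```

Note that the first assertion is exact regardless of the scale, which is precisely why an integral zero point makes zero-padding bias-free.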

@@ -99,14 +99,14 @@ scope of this document, and it is safe to assume unless otherwise stated that
 rounding should be according to the IEEE754 default of RNE (where hardware
 permits).
 
-### Converting between Real and fixed point or affine
+### Converting between real and fixed point or affine
 
-To convert a Real value to a fixed point value, you must know the scale. To
-convert a Real value to an affine value, you must know the scale and zero point.
+To convert a real value to a fixed point value, we must know the scale. To
+convert a real value to an affine value, we must know the scale and the zero point.
 
 #### Real to affine
 
-To convert an input tensor of Real-valued elements (usually represented by a
+To convert an input tensor of real-valued elements (usually represented by a
 floating point format, frequently
 [Single precision](https://en.wikipedia.org/wiki/Single-precision_floating-point_format))
 to a tensor of affine elements represented by an integral type (e.g. 8-bit
@@ -121,16 +121,16 @@ af&fine\_value_{uint8 \, or \, uint16} \\
 $$
 
 In the above, we assume that $$real\_value$$ is a Single, $$scale$$ is a Single,
-$$roundToNearestInteger$$ returns a signed 32 bit integer, and $$zero\_point$$
-is an unsigned 8 or 16 bit integer. Note that bit depth and number of fixed
+$$roundToNearestInteger$$ returns a signed 32-bit integer, and $$zero\_point$$
+is an unsigned 8-bit or 16-bit integer. Note that bit depth and number of fixed
 point values are indicative of common types on typical hardware but is not
 constrained to particular bit depths or a requirement that the entire range of
 an N-bit integer is used.
 
-#### Affine to Real
+#### Affine to real
 
 To convert an output tensor of affine elements represented by uint8
-or uint16 to a tensor of Real-valued elements (usually represented with a
+or uint16 to a tensor of real-valued elements (usually represented with a
 floating point format, frequently Single precision), the following conversion
 can be performed:
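The two conversions in this section can be sketched in Python for a uint8 affine type. The function names and parameters are illustrative assumptions, not MLIR APIs; note that Python's `round` uses round-to-nearest-even, matching the RNE default mentioned above:

```python
def quantize(real: float, scale: float, zero_point: int) -> int:
    # roundToNearestInteger (RNE), shift by the zero point, then clamp
    # to the uint8 storage range [0, 255].
    return max(0, min(255, round(real / scale) + zero_point))

def dequantize(affine: int, scale: float, zero_point: int) -> float:
    # real_value = (affine_value - zero_point) * scale
    return (affine - zero_point) * scale

# Assumed example parameters.
scale, zero_point = 0.05, 128

q = quantize(1.0, scale, zero_point)
# Round-tripping loses at most half a step: |error| <= scale / 2.
assert abs(dequantize(q, scale, zero_point) - 1.0) <= scale / 2

# Out-of-range reals saturate at the ends of the affine range.
assert quantize(100.0, scale, zero_point) == 255
assert quantize(-100.0, scale, zero_point) == 0
```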

@@ -186,10 +186,10 @@ MLIR:
 
 * The TFLite op-set natively supports uniform-quantized variants.
 * Passes and tools exist to convert directly from the *TensorFlow* dialect
-  to the TFLite quantized op-set.
+  to the TFLite quantized operation set.
 
 * [*FxpMath* dialect](#fxpmath-dialect) containing (experimental) generalized
-  representations of fixed-point math ops and conversions:
+  representations of fixed-point math operations and conversions:
 
   * [Real math ops](#real-math-ops) representing common combinations of
     arithmetic operations that closely match corresponding fixed-point math
@@ -198,16 +198,16 @@ MLIR:
   * [Fixed-point math ops](#fixed-point-math-ops) that for carrying out
     computations on integers, as are typically needed by uniform
     quantization schemes.
-  * Passes to lower from real math ops to fixed-point math ops.
+  * Passes to lower from real math operations to fixed-point math operations.
 
 * [Solver tools](#solver-tools) which can (experimentally and generically
   operate on computations expressed in the *FxpMath* dialect in order to
   convert from floating point types to appropriate *QuantizedTypes*, allowing
-  the computation to be further lowered to integral math ops.
+  the computation to be further lowered to integral math operations.
 
-Not every application of quantization will use all facilities. Specifically, the
+Not every application of quantization will use all of these facilities. Specifically, the
 TensorFlow to TensorFlow Lite conversion uses the QuantizedTypes but has its own
-ops for type conversion and expression of the backing math.
+operations for type conversion and expression of the supporting math.
 
 ## Quantization Dialect
 
@@ -218,20 +218,20 @@ TODO : Flesh this section out.
 * QuantizedType base class
 * UniformQuantizedType
 
-### Quantized type conversion ops
+### Quantized type conversion operations
 
 * qcast : Convert from an expressed type to QuantizedType
 * dcast : Convert from a QuantizedType to its expressed type
 * scast : Convert between a QuantizedType and its storage type
 
-### Instrumentation and constraint ops
+### Instrumentation and constraint operations
 
 * const_fake_quant : Emulates the logic of the historic TensorFlow
-  fake_quant_with_min_max_args op.
+  fake_quant_with_min_max_args operation.
 * stats_ref : Declares that statistics should be gathered at this point with a
   unique key and made available to future passes of the solver.
 * stats : Declares inline statistics (per layer and per axis) for the point in
-  the computation. stats_ref ops are generally converted to stats ops once
+  the computation. stats_ref operations are generally converted to stats operations once
   trial runs have been performed.
 * coupled_ref : Declares points in the computation to be coupled from a type
   inference perspective based on a unique key.
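For intuition, the three conversion operations listed above behave roughly as follows. This is a conceptual Python sketch, not MLIR syntax, and the quantization parameters are assumed for illustration:

```python
# Conceptual model of qcast / dcast / scast for one assumed
# UniformQuantizedType: uint8 storage, scale 0.1, zero point 128.
SCALE, ZERO_POINT = 0.1, 128

def qcast(expressed: float) -> int:
    # Expressed (float) type -> quantized type.
    return max(0, min(255, round(expressed / SCALE) + ZERO_POINT))

def dcast(quantized: int) -> float:
    # Quantized type -> expressed (float) type.
    return (quantized - ZERO_POINT) * SCALE

def scast(quantized: int) -> int:
    # Quantized type <-> storage type: the integer bit pattern is
    # reinterpreted, not changed.
    return quantized

x = 2.5
assert scast(qcast(x)) == qcast(x)        # scast changes no bits
assert abs(dcast(qcast(x)) - x) <= SCALE / 2
```

A qcast immediately followed by a dcast is therefore a lossy round trip, which is exactly how the fake_quant conversion described below simulates quantization.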
@@ -246,23 +246,23 @@ As originally implemented, TensorFlow Lite was the primary user of such
 operations at inference time. When quantized inference was enabled, if every
 eligible tensor passed through an appropriate fake_quant node (the rules of
 which tensors can have fake_quant applied are somewhat involved), then
-TensorFlow Lite would use the attributes of the fake_quant ops to make a
-judgment about how to convert to use kernels from its quantized ops subset.
+TensorFlow Lite would use the attributes of the fake_quant operations to make a
+judgment about how to convert to use kernels from its quantized operations subset.
 
-In MLIR-based quantization, fake_quant_\* ops are handled by converting them to
+In MLIR-based quantization, fake_quant_\* operations are handled by converting them to
 a sequence of *qcast* (quantize) followed by *dcast* (dequantize) with an
 appropriate *UniformQuantizedType* as the target of the qcast operation.
 
 This allows subsequent compiler passes to preserve the knowledge that
-quantization was simulated in a certain way while giving the compiler
+quantization was simulated in a certain way, while giving the compiler
 flexibility to move the casts as it simplifies the computation and converts it
 to a form based on integral arithmetic.
 
 This scheme also naturally allows computations that are *partially quantized*
-where the parts which could not be reduced to integral ops are still carried out
+where the parts which could not be reduced to integral operations are still carried out
 in floating point with appropriate conversions at the boundaries.
 
-## TFLite Native Quantization
+## TFLite native quantization
 
 TODO : Flesh this out
 
@@ -280,16 +280,16 @@ TODO : Flesh this out
 -> tfl.Q) and replaces with (op). Also replace (constant_float -> tfl.Q)
 with (constant_quant).
 
-## FxpMath Dialect
+## FxpMath dialect
 
-### Real math ops
+### Real math operations
 
 Note that these all support explicit clamps, which allows for simple fusions and
 representation of some common sequences quantization-compatible math. Of
 addition, some support explicit biases, which are often represented as separate
 adds in source dialects.
 
-TODO: This op set is still evolving and needs to be completed.
+TODO: This operation set is still evolving and needs to be completed.
 
 * RealBinaryOp
 * RealAddEwOp
@@ -312,9 +312,9 @@ TODO: This op set is still evolving and needs to be completed.
 * CMPLZ
 * CMPGZ
 
-### Fixed-point math ops
+### Fixed-point math operations
 
-TODO: This op set only has enough ops to lower a simple power-of-two
+TODO: This operation set only has enough operations to lower a simple power-of-two
 RealAddEwOp.
 
 * RoundingDivideByPotFxpOp
@@ -331,26 +331,26 @@ adjacent areas such as solving for transformations to other kinds of lower
 precision types (i.e. bfloat16 or fp16).
 
 Solver tools are expected to operate in several modes, depending on the
-computation and the manner in which it was trained:
+computation and the training characteristics of the model:
 
 * *Transform* : With all available information in the MLIR computation, infer
   boundaries where the computation can be carried out with integral math and
   change types accordingly to appropriate QuantizedTypes:
 
   * For passthrough ops which do not perform active math, change them to
     operate directly on the storage type, converting in and out at the edges
-    via scast ops.
-  * For ops that have the *Quantizable* trait, the type can be set directly.
-    This includes ops from the [real math ops set]{#real-math-ops}.
-  * For others, encase them in appropriate dcast/qcast ops, presuming that
+    via scast operations.
+  * For operations that have the *Quantizable* trait, the type can be set directly.
+    This includes operations from the [real math ops set]{#real-math-ops}.
+  * For others, encase them in appropriate dcast/qcast operations, presuming that
     some follow-on pass will know what to do with them.
 
 * *Instrument* : Most of the time, there are not sufficient implied
   constraints within a computation to perform many transformations. For this
-  reason, the solver can insert instrumentation ops at points where additional
+  reason, the solver can insert instrumentation operations at points where additional
   runtime statistics may yield solutions. It is expected that such
   computations will be lowered as-is for execution, run over an appropriate
-  eval set, and statistics at each instrumentation point made available for a
+  evaluation set, and statistics at each instrumentation point made available for a
   future invocation of the solver.
 
 * *Simplify* : A variety of passes and simplifications are applied once
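One way a solver might use the statistics gathered in *Instrument* mode is to derive uniform quantization parameters for a uint8 type from an observed min/max range. This is an illustrative sketch under assumed conventions; `choose_params` is a hypothetical helper, not a function in the MLIR codebase:

```python
def choose_params(stat_min: float, stat_max: float,
                  qmin: int = 0, qmax: int = 255):
    # Widen the observed range so the real zero is representable
    # (see the zero-point discussion earlier in this document).
    stat_min, stat_max = min(stat_min, 0.0), max(stat_max, 0.0)
    scale = (stat_max - stat_min) / (qmax - qmin)
    # Choose an integral zero point so real 0.0 maps to an exact
    # affine value, clamped into the storage range.
    zero_point = max(qmin, min(qmax, round(qmin - stat_min / scale)))
    return scale, zero_point

# E.g. statistics collected over an evaluation set showed values in [-1.0, 3.0]:
scale, zero_point = choose_params(-1.0, 3.0)
assert 0 <= zero_point <= 255
# The real zero maps exactly onto the integral zero point, as required
# for bias-free zero padding.
```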
