
Lower Numerical Precision Deep Learning Inference and Training

Datafloq Sponsored / 32 min read.
March 21, 2018

This advertorial is sponsored by Intel®

Introduction

Most commercial deep learning applications today use 32 bits of floating point precision (fp32) for training and inference workloads. Various researchers have demonstrated that both deep learning training and inference can be performed with lower numerical precision, using 16-bit multipliers for training and 8-bit multipliers or fewer for inference with minimal to no loss in accuracy (higher precision, 16 bits rather than 8 bits, is usually needed during training to accurately represent the gradients during the backpropagation phase). Using these lower numerical precisions (training with 16-bit multipliers accumulated to 32 bits or more, and inference with 8-bit multipliers accumulated to 32 bits) will likely become the standard over the next year, in particular for convolutional neural networks (CNNs).

There are two main benefits of lower precision. First, many operations are memory bandwidth bound, and reducing precision would allow for better usage of cache and reduction of bandwidth bottlenecks. Thus, data can be moved faster through the memory hierarchy to maximize compute resources. Second, the hardware may enable higher operations per second (OPS) at lower precision as these multipliers require less silicon area and power.

In this article, we review the history of lower numerical precision training and inference and describe how Intel® is enabling lower numerical precision for deep learning inference and training. Specifically, we describe instructions available in the current Intel® Xeon® Scalable processors and instructions that will be available in some future processors. We describe how to quantize the model weights and activations and the lower numerical functions available in the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). Finally, we describe how deep learning frameworks take advantage of these lower numerical precision functions and reduce the conversion overhead between different numerical precisions. Each section can be read independently of other sections and the reader may skip to their section of interest. A detailed exposition including commercial examples of deep learning training with the Intel Xeon Scalable processors is presented elsewhere.

Brief History of Lower Precision in Deep Learning

Researchers have demonstrated deep learning training with 16-bit multipliers and inference with 8-bit multipliers or less of numerical precision accumulated to higher precision with minimal to no loss in accuracy across various models. Vanhoucke, et al. (2011) quantized activations and weights to 8-bits and kept the biases and first layer input at full precision (fp32) for the task of speech recognition on CPUs. Hwang, et al. (2014) trained a simple network with quantized weights of -1, 0 and 1 in the feed forward propagation and updated the high precision weights in the back propagation using the MNIST* and TIMIT* datasets with negligible performance loss. Courbariaux, et al. (2015) used the MNIST, CIFAR-10*, and SVHN* datasets to train with lower numerical precision multipliers and high precision accumulators and updated the high precision weights. They proposed combining dynamic fixed point (having one shared exponent for a tensor or high dimensional array) with Gupta, et al.'s (2015) stochastic rounding as future work. This became the core piece of Koster, et al.'s (2017) use of the Flexpoint numerical format in the Intel® Nervana™ Neural Network Processors. Kim and Smaragdis (2016) trained with binary weights and updated on full precision weights with competitive performance on the MNIST dataset. Miyashita, et al. (2016) encoded the weights and activations in a base-2 logarithmic representation (since weights/activations have a non-uniform distribution).

They trained CIFAR-10 with 5-bit weights and 4-bit activations resulting in minimal performance degradation. Rastegari, et al. (2016) trained AlexNet with binary weights (except for the first and last layers) and updated on full precision weights with a 2.9% top-1 accuracy loss. Based on their experiments, they recommend avoiding binarization in fully connected layers and convolutional layers with small channels or filter sizes (e.g., 1×1 kernels). Mellempudi, et al. (2017) from Intel Labs trained ResNet-101 with 4-bit weights and 8-bit activations in convolutional layers while doing updates in full precision with a 2% top-1 accuracy loss. Micikevicius, et al. (2017) trained with 16-bit floating-point (fp16) multipliers and full precision accumulators and updated the full precision weights with negligible to no loss in accuracy for AlexNet*, VGG-D*, GoogLeNet*, ResNet-50*, Faster R-CNN*, Multibox SSD*, DeepSpeech2*, Sequence-to-Sequence*, bigLSTM*, and DCGAN* (some models required gradient scaling to match full precision results). Baidu researchers (2017) successfully used 8 bits of fixed point precision with 1 sign bit, 4 bits for the integer part, and 3 bits for the fractional part. Sze, et al. (2017) used various quantization techniques (see Table 3 in their paper) showing minimal to no loss at reduced precision (except for the first and last layers, which were kept at full precision). An anonymous submission to ICLR 2018 details how to achieve state-of-the-art accuracy on ResNet-50, GoogLeNet, VGG-16, and AlexNet using 16-bit integer multipliers and 32-bit accumulators.

Lower Numerical Precision With Intel® Xeon® Scalable Processors

The Intel Xeon Scalable processor now includes the Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instruction set, which has 512-bit wide Fused Multiply Add (FMA) core instructions. These instructions enable lower numerical precision multiplies with higher precision accumulates. Multiplying two 8-bit values and accumulating the result to 32 bits requires 3 instructions, with one of the 8-bit vectors in unsigned int8 (u8) format, the other in signed int8 (s8) format, and the accumulation in signed int32 (s32) format. This allows for 4x more input at the cost of 3x more instructions, or 33.33% more compute with 1/4 the memory requirement. The reduced memory and higher frequency for lower numerical precision operations make it even faster. See Figure 1 for details.1

Figure 1. The Intel Xeon Scalable processor core enables 8-bit multiplies with 32-bit accumulates with 3 instructions: VPMADDUBSW u8×s8→s16 multiplies, VPMADDWD (multiply by a broadcast vector of 1s) s16→s32, and VPADDD s32→s32, which adds the result to the accumulator. This allows for 4x more input over fp32 at the cost of 3x more instructions, or 33.33% more compute and ¼ the memory requirement. The reduced memory and higher frequency available with lower numerical precision make it even faster. Image credit to Israel Hirsh.
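
To make the instruction sequence concrete, the following NumPy sketch models the semantics of the three instructions for a toy dot product (the function names mirror the instruction mnemonics, but the code is only an illustration of the arithmetic, not of the actual vectorized intrinsics or register lanes):

import numpy as np

def vpmaddubsw(a_u8, w_s8):
    # VPMADDUBSW: u8*s8 products widened to s16, adjacent pairs summed;
    # pair sums outside the s16 range are the overflow hazard discussed
    # later in the text (modeled here as saturation).
    prod = a_u8.astype(np.int32) * w_s8.astype(np.int32)
    pairs = prod.reshape(-1, 2).sum(axis=1)
    return np.clip(pairs, -32768, 32767).astype(np.int16)

def vpmaddwd_broadcast1(x_s16):
    # VPMADDWD with a broadcast vector of 1s: widen s16 pairs to 32-bit partial sums.
    return x_s16.astype(np.int32).reshape(-1, 2).sum(axis=1)

def vpaddd(acc_s32, x_s32):
    # VPADDD: add the 32-bit partial sums into the s32 accumulator.
    return acc_s32 + x_s32

rng = np.random.default_rng(0)
a_u8 = rng.integers(0, 256, size=16, dtype=np.uint8)    # activations
w_s8 = rng.integers(-128, 128, size=16, dtype=np.int8)  # weights
acc = np.zeros(4, dtype=np.int32)                       # s32 accumulator lanes
acc = vpaddd(acc, vpmaddwd_broadcast1(vpmaddubsw(a_u8, w_s8)))

# Matches the exact dot product whenever the s16 intermediate did not saturate.
print(acc.sum(), int(a_u8.astype(np.int64) @ w_s8.astype(np.int64)))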

The Intel AVX-512 instructions also enable 16-bit multiplies. Multiplying two 16-bit values and accumulating the result to 32 bits requires 2 instructions (2 cycles), with both 16-bit vectors in signed int16 (s16) format and the accumulation in signed int32 (s32) format. This allows for 2x more input at the cost of 2x more instructions, resulting in no additional compute. It does, however, reduce the memory requirement and bandwidth bottlenecks, both of which may improve the overall performance. See Figure 2 for details.


Figure 2. The Intel Xeon Scalable processor core is capable of 16-bit multiplies with 32-bit accumulates with 2 instructions: VPMADDWD s16×s16→s32 multiplies, and VPADDD s32→s32, which adds the result to the accumulator. This allows for 2x more input over fp32 at the cost of 2x more instructions, or no additional compute, and ½ the memory requirement. Image credit to Israel Hirsh.

Intel developed the AVX512_VNNI (Vector Neural Network Instruction) extension, a new set of Intel AVX-512 instructions to boost DL performance. Ice Lake and other future microarchitectures (see Table 1-1) will have the AVX512_VNNI instructions. AVX512_VNNI includes 1) an FMA instruction for 8-bit multiplies with 32-bit accumulates u8×s8→s32 as shown in Figure 3, and 2) an FMA instruction for 16-bit multiplies with 32-bit accumulates s16×s16→s32 as shown in Figure 4. The theoretical peak compute gains are 4x int8 OPS and 2x int16 OPS over fp32 OPS, respectively. Practically, the gains may be lower due to memory bandwidth bottlenecks.

Figure 3. AVX512_VNNI enables 8-bit multiplies with 32-bit accumulates with 1 instruction. The VPMADDUBSW, VPMADDWD, VPADDD instructions in Figure 1 are fused into the VPDPBUSD instruction u8×s8→s32. This allows for 4x more inputs over fp32 and (theoretical peak) 4x more compute with ¼ the memory requirements. Image credit to Israel Hirsh.

Figure 4. AVX512_VNNI enables 16-bit multiplies with 32-bit accumulates with 1 instruction. The VPMADDWD, VPADDD instructions in Figure 2 are fused into the VPDPWSSD instruction s16×s16→s32. This allows for 2x more inputs over fp32 and (theoretical peak) 2x more compute with ½ the memory requirements. Image credit to Israel Hirsh.

A potential issue is the undefined behavior on overflows that may occur when using the VPMADDUBSW instruction u8×s8→s16 (see Figure 1). This is a problem when both the u8 and s8 values are near their maximum values.2 It can be mitigated by reducing the precision of the inputs by 1 bit. In practice, this is not an issue when using the AVX512_VNNI VPDPBUSD FMA instruction u8×s8→s32.

An overflow is more likely to occur with the AVX512_VNNI VPDPWSSD FMA instruction s16×s16→s32. This can be similarly mitigated by reducing the precision of the activations and the weights by 1 or 2 bits. Another technique to prevent overflow is to keep a second accumulator in fp32: after a set number of s32 accumulates, the s32 partial result is converted to fp32 and added to the fp32 accumulator. Preliminary results show that statistical performance does not suffer using these techniques.
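
To see why one bit of headroom suffices for the u8×s8→s16 case, here is the worst-case arithmetic for the two adjacent products summed by VPMADDUBSW (a quick sanity check, not tied to any library):

# Worst-case s16 intermediate in VPMADDUBSW: two adjacent u8*s8 products summed.
print(255 * 127 * 2)    #  64770  -> outside [-32768, 32767]
print(255 * -128 * 2)   # -65280  -> outside [-32768, 32767]
# Reducing the u8 input to 7 bits keeps the pair sum inside the s16 range:
print(127 * 127 * 2)    #  32258  -> fits
print(127 * -128 * 2)   # -32512  -> fits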

Compiler support for the AVX512_VNNI instructions is underway. The GCC 8 development code and the LLVM/Clang 6.0 compiler already support AVX512_VNNI instructions. The October 2017 update of the X86 Encoder Decoder (XED) and the Intel software developer emulator (SDE) adds support for AVX512_VNNI instructions.

Intel® MKL-DNN Library Lower Numerical Precision Primitives

The Intel MKL-DNN library contains popular deep learning functions or primitives used across various models, such as inner products, convolutions, rectified linear units (ReLU), and batch normalization (BN), along with functions necessary to manipulate the layout of tensors or high dimensional arrays. Intel MKL-DNN is optimized for Intel processors with Intel AVX-512, Intel® AVX2, and Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2) instructions. These functions use fp32 for training and inference workloads. Recently, new functions were introduced to support inference workloads with 8 bits of precision in convolutional, ReLU, fused convolutional plus ReLU, and pooling layers. Functions for long short-term memory (LSTM), other fused operations, and Winograd convolutions with 8 bits are designated as future work. Intel MKL-DNN support for 16-bit multiplies using the AVX512_VNNI instructions is designated as future work for when the instructions become available.

Currently, Intel MKL-DNN does not have local response normalization (LRN), fully connected (FC), softmax, or batch normalization (BN) layers implemented with 8 bits of precision (only with fp32), for the following reasons. Modern models do not use LRN, and older models can be modified to use batch normalization instead. Modern CNN models do not typically have many FC layers, although adding 8-bit support for FC layers is designated as future work. The softmax function currently requires full precision, as it does not maintain accuracy with 8 bits of precision. A BN inference layer is not needed, as it can be absorbed by its preceding layer by scaling the weight values and modifying the bias, as discussed in the Enabling Lower Numerical Precision in the Frameworks section.

Intel MKL-DNN implements the 8-bit convolution operations with the activation (or input) values in u8 format, the weights in s8 format, and the biases in s32 format (biases can be kept in fp32 as well, as they take a very small percentage of the overall compute). Figure 5 shows the process of inference operations with 8-bit multipliers accumulated to s32.

Figure 5. The data layer or the first convolution layer activations are quantized to u8 as inputs to the next convolutional layer. The weights are quantized to s8, and the bias is formatted to s32 and added to the s32 convolution accumulate. The framework chooses the format of the convolution output as s8, u8, or s32 depending on the parameters of the following layer. Image credit to Jiong Gong.

8-bit Quantization of Activations With Non-negative Values and Weights

Intel MKL-DNN currently assumes that the activations are non-negative, which is the case after the ReLU activation function. Later in this article we discuss how to quantize activations with negative values. Intel MKL-DNN quantizes the values for a given tensor or for each channel in a tensor (the choice is up to the framework developers) as follows.

R{a,w} = max(abs(T{a,w})), where T{a,w} is a tensor corresponding to either the weights w or the activations or model inputs a.

Qa = 255/Ra is the quantization factor for activations with non-negative values, and Qw = 127/Rw is the quantization factor for the weights. The quantized activation, weights, and bias are:
au8 = ||Qa af32|| ∈ [0, 255],   Ws8 = ||Qw Wf32|| ∈ [−127, 127],   bs32 = ||Qa Qw bf32||,
where the function ||⋅|| rounds to the nearest integer. Note that while the s8 format supports −128, the smallest quantized s8 weight value used is −127.

The affine transformation using 8-bit multipliers and 32-bit accumulates results in
xs32 = Ws8 au8 + bs32 ≈ Qa Qw (Wf32 af32 + bf32) = Qa Qw xf32,
where the approximation is because the equation ignores the rounding operation,
xf32 = Wf32 af32 + bf32
is the affine transformation in fp32 format, and D = 1/(Qa Qw) is the dequantization factor.

In quantizing to u8 and s8 formats, a zero value maps to a specific value without any rounding. Given that zero is one of the most common values, it is advantageous to have exact mappings to reduce quantization errors and improve statistical accuracy.
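
The following NumPy sketch walks through these formulas end to end for one small, illustrative fully connected layer (shapes, values, and the per-tensor choice are assumptions for the example, not the Intel MKL-DNN implementation): it computes Ra, Rw, Qa, and Qw, quantizes the activations, weights, and bias, runs the affine transformation in integers, and dequantizes with D = 1/(Qa Qw).

import numpy as np

def quantize(a_f32, w_f32, b_f32):
    r_a = np.max(np.abs(a_f32))            # R_a (activations assumed non-negative, e.g., post-ReLU)
    r_w = np.max(np.abs(w_f32))            # R_w
    q_a = 255.0 / r_a                      # Q_a: activation factor -> u8
    q_w = 127.0 / r_w                      # Q_w: weight factor     -> s8
    a_u8 = np.rint(q_a * a_f32).clip(0, 255).astype(np.uint8)
    w_s8 = np.rint(q_w * w_f32).clip(-127, 127).astype(np.int8)
    b_s32 = np.rint(q_a * q_w * b_f32).astype(np.int32)
    return a_u8, w_s8, b_s32, q_a, q_w

rng = np.random.default_rng(0)
a_f32 = rng.random((64,)).astype(np.float32)              # non-negative activations
w_f32 = (rng.random((16, 64)) - 0.5).astype(np.float32)   # weights
b_f32 = (rng.random((16,)) - 0.5).astype(np.float32)      # bias

a_u8, w_s8, b_s32, q_a, q_w = quantize(a_f32, w_f32, b_f32)

# Affine transformation with 8-bit inputs accumulated to s32, then dequantized with D.
x_s32 = w_s8.astype(np.int32) @ a_u8.astype(np.int32) + b_s32
x_f32 = w_f32 @ a_f32 + b_f32
print(np.max(np.abs(x_s32 / (q_a * q_w) - x_f32)))        # small residual quantization error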

The quantization factors above can be in fp32 format in the Intel Xeon Scalable processors. However, some architectures do not support divides (e.g., FPGAs) and use shifts. For those architectures, the scalar is rounded to the nearest power-of-two and the scaling is done with bit-shifts. The reduction in statistical accuracy is minimal (usually <1%).
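
For architectures that replace the divide with a shift, the sketch below (illustrative values only) rounds the fp32 quantization factor to the nearest power of two and compares the resulting u8 codes against the exact factor; the coarser scale is what costs the small accuracy drop mentioned above.

import numpy as np

def pow2_quant_factor(q_f32):
    # Round the quantization factor to the nearest power of two so scaling
    # can be implemented with bit shifts instead of divides.
    shift = int(np.round(np.log2(q_f32)))
    return shift, float(2 ** shift)

r_a = 6.3                        # illustrative activation range R_a
q_exact = 255.0 / r_a            # exact fp32 quantization factor (~40.5)
shift, q_pow2 = pow2_quant_factor(q_exact)   # shift = 5, q_pow2 = 32.0

a = np.array([0.0, 0.5, 3.1, 6.3], dtype=np.float32)
print(np.rint(q_exact * a))      # [  0.  20. 125. 255.]
print(np.rint(q_pow2 * a))       # [  0.  16.  99. 202.]  (power-of-two scale)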

Efficient 8-bit Multiplies

In Figure 6, we demonstrate how to efficiently perform the 8-bit multiplies for A × W. Intel MKL-DNN uses an N H W C data layout for the activation tensors, where N is the batch size, H is the height, W is the width, and C is the number of channels, and an (O/16)K(C/4)T16o4c data layout for the weight tensors, where O is the number of kernels or output channels, C is the number of input channels, K is the height, and T is the width. The first 32 bits (4 int8 values) of tensor A, shown in gray, are broadcasted 16 times to fill a 512-bit register. Intel MKL-DNN modifies the data layout of tensor W after quantizing the weights. The tensor W data layout is rearranged as W′ by groups of 16 columns, with each column having 32 bits (4 int8 values) to be read contiguously in memory, starting with the first 4 values in column 1 occupying the first 32 bits of the register (red), the next 4×1 occupying the next 32 bits of the register (orange), and so forth (green). The second, third, and fourth blocks (yellow) below the first block are rearranged in the same pattern. The next set of blocks (blue) follows. In practice, tensor W is usually transposed before rearranging the memory layout in order to access 1×4 contiguous memory values rather than 4×1 scattered values when rearranging the data layout. Modifying this data layout is usually done once and stored for reuse across all inference iterations.

Figure 6. Efficient use of int8 multiplies to compute the product A×W requires a data layout transformation of tensor W in order to read contiguous bits. Groups of 32 bits of A are broadcasted 16 times to fill a 512-bit register, which is multiplied by groups of 512 bits from tensor W′.

The register with the first 4 int8 values (copied 16 times) of A is multiplied by the 64 int8 values (512 bits) of W′ and accumulated. The next 4 values in A are broadcasted 16 times to another register, which is multiplied by the next 64 int8 values of W′. This continues until the first row of A is read and the results are accumulated. The outputs (after all 3 instructions of the 8-bit FMA) are the first 16 output values (requiring 512 bits at s32). The first row of A is then multiplied by the next values of W′, resulting in the next 16 values of the output.

The Intel Xeon Scalable processors have up to 32 registers. When executing in the 512-bit register port scheme on processors with two FMA units,3 the Port 0 FMA has a latency of 4 cycles and the Port 5 FMA has a latency of 6 cycles. The instructions used for deep learning workloads at int8 support bypass and have a latency of 5 cycles for both Ports 0 and 5 (see Section 15.17). In practice, multiple rows of W′ are loaded into multiple registers to hide these latencies.
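
The sketch below models this layout transformation in NumPy for a toy 2D weight matrix: W is rearranged into (O/16)(C/4) tiles of 16 output channels by 4 input channels, so each 512-bit group of W′ lines up with a broadcast of 4 int8 activation values. The shapes and the simple loop are illustrative assumptions; the real Intel MKL-DNN kernels operate on registers and the full convolution layout.

import numpy as np

C, O = 8, 32                     # toy input/output channel counts (multiples of 4 and 16)
rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(1, C), dtype=np.uint8)    # one row of quantized activations
W = rng.integers(-128, 128, size=(C, O), dtype=np.int8)  # quantized weights

# Rearrange W into (O/16, C/4, 16o, 4c) blocks: each [ob, cb] tile is one 512-bit group of W'.
W_blocked = W.reshape(C // 4, 4, O // 16, 16).transpose(2, 0, 3, 1)

acc = np.zeros(O, dtype=np.int32)
for ob in range(O // 16):               # each group of 16 output channels
    for cb in range(C // 4):            # each broadcast of 4 activation values (32 bits of A)
        a4 = A[0, 4 * cb: 4 * cb + 4].astype(np.int32)
        w_tile = W_blocked[ob, cb].astype(np.int32)       # 16 x 4 int8 values (512 bits)
        acc[16 * ob: 16 * ob + 16] += w_tile @ a4         # 16 s32 partial sums

print(np.array_equal(acc, (A.astype(np.int32) @ W.astype(np.int32))[0]))   # True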

16-bit Functions for Training

Intel MKL-DNN support for 16-bit multiplies using the AVX512_VNNI instructions is designated as future work for when the instructions become available. Nevertheless, researchers have already shown training of various CNN models using 16-bit multiplies with 32-bit accumulates by taking advantage of the AVX512_4VNNI instructions (also known as QVNNI, available on some Intel® Xeon® Phi™ processors), specifically the AVX512_4VNNI VP4DPWSSD instruction (similar to the AVX512_VNNI VPDPWSSD instruction discussed earlier).

These researchers matched the fp32 statistical performance of ResNet-50, GoogLeNet-v1, VGG-16, and AlexNet with the same number of iterations as the fp32 models without changing the hyper-parameters. They use s16 to store the activations, weights, and gradients, and also keep a master copy of the fp32 weights for the weight updates, which gets quantized back to s16 after each iteration. They use quantization factors that are powers of two, which facilitates managing the quantization/dequantization factors through tensor multiplies.

Enabling Lower Numerical Precision in the Frameworks

The popular frameworks enable users to define their model without writing all the function definitions themselves. The details of the implementations of the various functions can be hidden from the framework users; these implementations are done by framework developers. This section explains the modifications required at the framework level to enable lower numerical precision.

Quantizing the weights is done before inference starts. Quantizing the activations efficiently requires precomputing the quantization factors. The activation quantization factors are usually precomputed by sampling the validation dataset to find the range, as described above. Values in the test dataset outside this range are saturated to the range. For negative activation values, the range before saturation could be relaxed to −128Ra′/127 in order to use the s8 = −128 value, where Ra′ is the maximum absolute value of these activations. These scalars are then written to a file.
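
A minimal calibration sketch of this precomputation (an assumed workflow with illustrative data, not a particular framework's tool): sample activations over a calibration set, record the range, derive the quantization factor, and saturate out-of-range values at test time.

import numpy as np

def calibrate(activation_batches):
    # R_a over the sampled (e.g., validation) activations.
    r_a = max(float(np.max(np.abs(b))) for b in activation_batches)
    return 255.0 / r_a                     # Q_a for non-negative activations

def quantize_u8(a_f32, q_a):
    # Values outside the calibrated range saturate to the u8 range.
    return np.clip(np.rint(q_a * a_f32), 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
calib_batches = [rng.random((32,)).astype(np.float32) * 4.0 for _ in range(10)]
q_a = calibrate(calib_batches)             # roughly 255 / 4.0; would be written to a file
test = np.array([0.0, 2.0, 5.0], dtype=np.float32)   # 5.0 lies outside the calibrated range
print(quantize_u8(test, q_a))              # approx [0 128 255]; the outlier saturates to 255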

8-bit Quantization of Activations or Inputs With Negative Values

Quantizing activations or input values with negative values can be implemented at the framework level as follows. Qa′ = 127/Ra′ is the quantization factor for activations with negative values. The s8 quantized format is as8 = ||Qa′ af32|| ∈ [−128, 127], where the function ||⋅|| rounds to the nearest integer. However, the activations must be in u8 format to take advantage of the Intel AVX-512 VPMADDUBSW instruction or the AVX512_VNNI VPDPBUSD instruction (both described in the section Lower Numerical Precision With Intel Xeon Scalable Processors). Therefore, all values in as8 are shifted by K = 128 to be non-negative:
au8 = as8 + K 1 ∈ [0, 255],

where 1 is a vector of all 1s, and the bias bf32 is modified as

b′f32 = bf32 − (K/Qa′) Wf32 1.

The methodology to quantize the weights and the modified bias is the same as before:
Ws8 = ||Qw Wf32|| ∈ [−127, 127],   b′s32 = ||Qa′ Qw b′f32||.

The affine transformation using 8-bit multipliers and 32-bit accumulates results in

xs32 = Ws8 au8 + b′s32 ≈ Qa′ Qw (Wf32 af32 + bf32) = Qa′ Qw xf32,

where

xf32 = Wf32 af32 + bf32,

and D = 1/(Qa′ Qw) is the dequantization factor.

When the input signal is already in u8 format (e.g., RGB images) but a preprocessing step is required to subtract the mean signal, the above equations can be used where K is the mean, au8 is the input signal (not pre-processed), and Qa′ = 1.
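
The sketch below puts these pieces together for an illustrative fully connected layer with signed activations (assumed shapes and values): quantize to s8, shift by K = 128 into u8, fold the shift into the modified bias b′f32, and check the dequantized result against fp32.

import numpy as np

rng = np.random.default_rng(0)
a_f32 = (rng.random((64,)) - 0.5).astype(np.float32)      # activations with negative values
w_f32 = (rng.random((16, 64)) - 0.5).astype(np.float32)
b_f32 = (rng.random((16,)) - 0.5).astype(np.float32)

K = 128
r_a = np.max(np.abs(a_f32)); q_a = 127.0 / r_a            # Q_a' = 127 / R_a'
r_w = np.max(np.abs(w_f32)); q_w = 127.0 / r_w            # Q_w  = 127 / R_w

a_s8 = np.rint(q_a * a_f32).clip(-128, 127)               # s8 quantization
a_u8 = (a_s8 + K).astype(np.uint8)                        # shift by K into u8
w_s8 = np.rint(q_w * w_f32).clip(-127, 127).astype(np.int8)

b_mod_f32 = b_f32 - (K / q_a) * w_f32.sum(axis=1)         # b'_f32 = b_f32 - (K/Q_a') W_f32 1
b_mod_s32 = np.rint(q_a * q_w * b_mod_f32).astype(np.int32)

x_s32 = w_s8.astype(np.int32) @ a_u8.astype(np.int32) + b_mod_s32
x_f32 = w_f32 @ a_f32 + b_f32
print(np.max(np.abs(x_s32 / (q_a * q_w) - x_f32)))        # small residual quantization error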

Researchers often keep the first convolution layer in fp32 format and do the other convolutional layers in int8 (see the Brief History of Lower Precision in Deep Learning section for examples). We observe that using these quantization techniques enables the use of all convolution layers in int8 with no significant decrease in statistical accuracy.



To recap, to use activations with negative values, the activations are quantized to s8 format and then shifted by K = 128 to u8 format. The only additional change is to modify the bias: b′f32 = bf32 − (K/Qa′) Wf32 1. For a convolution layer, the product Wf32 1 generalizes to the sum over all the values of Wf32 along all dimensions except the dimension shared with bf32. See Appendix A for details.

Fused Quantization

Fused quantization improves performance by combining dequantization and quantization as follows so there is no need to convert to fp32. The activation at layer l+1 is:

af32(l+1) = g(xf32(l)) = g(D(l) xs32(l)),

where g(⋅) is a non-linear activation function. Assuming the ReLU activation function, the activation can be expressed in u8 format as
au8(l+1) = ||Qa(l+1) g(D(l) xs32(l))|| = ||g(Qa(l+1) D(l) xs32(l))||,
where the product Qa(l+1) D(l) enables computing the next layer's quantized activation in u8 format without computing the fp32 representation.
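
A small sketch of this fused requantization (illustrative scales, chosen as powers of two so the two paths agree exactly): the s32 accumulator of layer l is converted directly to the u8 activation of layer l+1 with the single combined scale Qa(l+1) D(l), with no intermediate fp32 tensor.

import numpy as np

rng = np.random.default_rng(0)
x_s32 = rng.integers(-50_000, 50_000, size=(16,), dtype=np.int32)   # layer-l s32 accumulator
d_l = 1.0 / (256.0 * 128.0)     # D(l) = 1 / (Qa(l) Qw(l)), illustrative power-of-two factors
q_a_next = 64.0                 # Qa(l+1), precomputed from calibration

# Reference path: dequantize to fp32, apply ReLU, then quantize to u8.
a_f32 = np.maximum(d_l * x_s32, 0.0)
a_u8_ref = np.clip(np.rint(q_a_next * a_f32), 0, 255).astype(np.uint8)

# Fused path: fold q_a_next * d_l into one scale and apply ReLU directly to the scaled s32 values.
a_u8_fused = np.clip(np.rint(np.maximum((q_a_next * d_l) * x_s32, 0.0)), 0, 255).astype(np.uint8)

print(np.array_equal(a_u8_ref, a_u8_fused))   # True: ReLU commutes with a positive scale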

When g(⋅) is the ReLU function (as in the equations below) and Q ≥ 0, the following property holds:
Q g(x) = g(Q x).
This property is useful for models with skip connections such as ResNet where a skip connection branch may have dependencies on various activations. As an example, and using the nomenclature by the ResNet-50 author in Caffe’s deploy.prototxt (see Figure 7), the quantized input activation in layer res2b_branch2a (abbreviated as 2b2a in the equations below) is

au8(2b2a) = ||g(Qa(2b2a) D(2a1) xs32(2a1) + Qa(2b2a) D(2a2c) xs32(2a2c))||,

where au8(2b2a) ∈ [0, 127] (instead of [0, 255]) because Qa(2b2a) D(2a1) xs32(2a1) ∈ [−128, 127] is in s8 format, since the product comes before the ReLU function, and Qa(2b2a) = 127/Ra(2b2a) is the quantization factor. Following this procedure, it is shown in Appendix B that the activation au8(2c2a) depends on xs32(2a1), xs32(2a2c), and xs32(2b2c). Similarly, the activation au8(3ca) depends on xs32(2a1), xs32(2a2c), xs32(2b2c), and xs32(2c2c).

Figure 7. Diagram of the second group of residual blocks in ResNet-50 (and the first branch in the third group) using the nomenclature by the ResNet-50 author in Caffe’s deploy.prototxt. The layers marked with a blue arrow have dependencies on 2 or more activations. Image credit to Barukh Ziv, Etay Meiri, Eden Segal.

Batch Normalization

A batch normalization (BN) inference layer is not needed as it can be absorbed by its preceding layer by scaling the weight values and modifying the bias. This technique only works for inference and is not unique to lower numerical precision. It can be implemented at the framework level instead of Intel MKL-DNN. BN is usually applied after the affine transformation x = W a + b and before the activation function (details in the original BN paper). BN normalizes x to be zero mean and unit norm, and then scales and shifts the normalized vector by γ and β, respectively, which are parameters also learned during training. During a training iteration, x is normalized using the mini-batch statistics. For inference, the mean E and variance V of x are precomputed using the statistics of the entire training dataset or a variant such as a running average of these statistics computed during training. During inference, the BN output y is:
y = γ (x − E)/√V + β = W′ a + b′,
where W′ = (γ/√V) W and b′ = (γ/√V) b + (β − γE/√V) 1. That is, during inference the BN layer can be replaced by adjusting the weights and bias in the preceding convolutional or fully connected layer.
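
A short fp32 sketch of this folding (toy shapes; gamma and beta are the learned BN parameters, E and V the precomputed mean and variance, and the small eps follows the usual BN formulation):

import numpy as np

rng = np.random.default_rng(0)
W = (rng.random((16, 64)) - 0.5).astype(np.float32)
b = (rng.random((16,)) - 0.5).astype(np.float32)
gamma = (rng.random((16,)) + 0.5).astype(np.float32)
beta = (rng.random((16,)) - 0.5).astype(np.float32)
E = (rng.random((16,)) - 0.5).astype(np.float32)
V = (rng.random((16,)) + 0.1).astype(np.float32)
eps = 1e-5

scale = gamma / np.sqrt(V + eps)
W_folded = scale[:, None] * W                 # W' = (gamma / sqrt(V)) W
b_folded = scale * b + (beta - scale * E)     # b' = (gamma / sqrt(V)) b + beta - (gamma / sqrt(V)) E

a = rng.random((64,)).astype(np.float32)
y_bn = gamma * ((W @ a + b) - E) / np.sqrt(V + eps) + beta   # affine layer followed by BN
y_folded = W_folded @ a + b_folded                            # single folded affine layer
print(np.max(np.abs(y_bn - y_folded)))                        # ~0, up to fp32 rounding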

Frameworks

Intel enabled 8-bit inference in the Intel® Distribution of Caffe*. Intel's DL Inference Engine, Apache* MXNet*, and TensorFlow* optimizations are expected to be available in Q2 2018. All these 8-bit optimizations are currently limited to CNN models; RNN models and other frameworks will follow later in 2018.

In the Intel Distribution of Caffe, the model.prototxt file is modified to include the precomputed scalars, as shown in Figure 8. Currently, the Intel Distribution of Caffe can provide the quantization factor as either a power of two or a regular fp32 value, and can use either 1 quantization factor per tensor or 1 per channel. These quantization factors are computed using a sampling tool built into the Intel Distribution of Caffe.

Figure 8. Quantization factors are added to the model.prototxt file. Image credit to Haihao Shen.

Intel’s Deep Learning Inference Engine is part of Intel’s Deep Learning Deployment Toolkit and Intel® Computer Vision SDK. It’s available on Linux* and Windows* OS and initially supports models trained from the Caffe, MXNet*, and TensorFlow* frameworks. The Inference Engine facilitates deployment of DL solutions by delivering a unified API for various hardware backends: Intel Xeon processors with Intel AVX-2 and Intel AVX-512, Intel Atom® processors, Intel® HD Graphics, and Intel® Arria® 10 (Intel® A10) discrete cards at various numerical precisions depending on the hardware. The inference engine will support 8-bit inference on Intel Xeon Scalable processors starting in Q2 2018.

TensorFlow already supports 8-bit inference and various quantization methods. It can dynamically compute the scale, or collect statistics during a training or calibration phase to then assign a quantization factor. TensorFlow's graph, which includes these scalars, is written to a file. The graph with the respective scalars is quantized and run during inference. TensorFlow supports two methods for quantization. One method is similar to Intel MKL-DNN's, setting the min and max as additive inverses. The other uses arbitrary values for min and max, which requires an offset plus a scale (not supported in Intel MKL-DNN). See Pete Warden's blog for more details, but note that the blog is outdated as it does not contain all the ways to quantize in TensorFlow.

Another TensorFlow tool is retraining or fine-tuning at lower numerical precision. Fine-tuning can improve the statistical performance. Given a model trained at fp32, after its weights are quantized, the model is fine-tuned with the quantized weights and the weights are re-quantized after each training iteration.

GemmLowP is a Google library adopted in TensorFlow Lite*. It uses u8 multiplies, where f32 = D × (u8 − K), K is a u8 value that maps to fp32 = 0, and D > 0 is the dequantization factor.
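
A toy sketch of this asymmetric u8 scheme based on the formula above (values are illustrative; gemmlowp's actual API is not shown):

import numpy as np

x = np.array([-1.0, 0.0, 0.5, 2.0], dtype=np.float32)
lo, hi = float(x.min()), float(x.max())
D = (hi - lo) / 255.0                     # dequantization factor
K = int(round(-lo / D))                   # u8 value that maps exactly to f32 = 0
x_u8 = np.clip(np.rint(x / D + K), 0, 255).astype(np.uint8)
print(K, x_u8)                            # zero point and quantized codes
print(D * (x_u8.astype(np.float32) - K))  # round trip is close to the original x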

The main Apache MXNet branch currently does not support 8-bit. However, a branch by one of the main MXNet contributors supports 8-bit inference. In that branch, there are two methods to quantize the values: one where the min value is mapped to 0 and the max value to 255 (note that zero does not map to an exact value); and another where the max of the absolute value is mapped to either −127 or 127 (note that zero maps to zero, similar to Intel MKL-DNN). The main difference with the presented approach is that the scalars in this MXNet branch are not precomputed. Rather, they are computed during the actual inference steps, which reduces the benefits of lower numerical precision. In that branch, the scalars for the activations are computed by multiplying the scalars from the inputs with the scalars from the weights: activation-scalar = input-scalar * weight-scalar, where input = input-scalar * quantized-input, weight = weight-scalar * quantized-weight, and activation = activation-scalar * quantized-activation; inputs, weights, activations, and scalars are in fp32 format, quantized-inputs and quantized-weights are in int8 format, and quantized-activations are in int32 format (see details). While the min and max of the activations are tracked, the values are only dequantized when encountering an fp32 layer (e.g., softmax).

TensorRT quantizes to s8 format similar to Intel MKL-DNN with the addition of finding a tighter range by minimizing the KL divergence between the quantized and reference distributions.

The TPU team claims that TPUs, which use int8 multiplies, are being used across a variety of models, including LSTM models. The software stack translates API calls from TensorFlow graphs into TPU instructions.

Caffe2’s docs state that there is “flexibility for future directions such as quantized computation,” but currently no plans for quantization have been disclosed.

PyTorch has a branch that offers various options to quantize but there is no discussion on which is better.

Microsoft introduced Project Brainwave* using a custom 8-bit floating point format (ms-fp8) that runs on Intel® Stratix® 10 FPGAs. The details of this format, its quantization techniques, and the framework implementation have not been disclosed. Project Brainwave supports CNTK* and TensorFlow and plans to support many others by converting models trained in popular frameworks to an internal graph-based intermediate representation.

Model and Graph Optimizations

Model optimizations can further improve inference performance. For example, in ResNet the stride operation can be moved to an earlier layer without modifying the end result while reducing the number of operations, as shown in Figure 9. This modification applies to both 8-bit and 32-bit models.

Figure 9. The stride-2 operation shown in the layers of the left blocks can be moved to an earlier layer during inference, which reduces the number of operations and does not modify the result. Courtesy of Eden Segal and Etay Meiri.

Conclusion

Lower numerical precision inference and training can improve computational performance with minimal or no reduction in statistical accuracy. Intel is enabling 8-bit precision for inference on the current generation of Intel Xeon Scalable processors. Intel is also enabling 8-bit precision for inference and 16-bit precision for training on future microarchitectures, in both hardware and software, by enabling compilers, the Intel MKL-DNN library, and popular deep learning frameworks.

Acknowledgements

A special thanks to the framework optimization team leads, and the Intel Xeon processor architects and researchers for the useful discussions including Israel Hirsh, Alex Heinecke, Vadim Pirogov, Frank Zhang, Rinat Rappoport, Barak Hurwitz, Dipankar Das, Dheevatsa Mudigere, Naveen Mellempudi, Dhiraj Kalamkar, Bob Valentine, AG Ramesh, Nagib Hakim as well as the wonderful reviewers R. Chase Adams, Nikhil Murthy, Banu Nagasundaram, Todd Wilson, Alexis Crowell, and Emily Hudson.

About the Authors

Andres Rodriguez, PhD, is a Sr. Principal Engineer working with the Data Center Group (DCG) and Artificial Intelligence Products Group (AIPG) where he designs AI solutions for Intel’s customers and provides technical leadership across Intel for AI products. He has 13 years of experience working in AI. Andres received his PhD from Carnegie Mellon University for his research in machine learning. He holds over 20 peer reviewed publications in journals and conferences, and a book chapter on machine learning.

Eden Segal, is a software developer at the Pre-Enabling team where he optimizes Deep Learning algorithms to find the peak algorithm performance on Intel processors. This knowledge is used to improve Intel’s performance across the entire deep learning stack from the hardware, through the libraries and up to the deep learning framework.

Etay Meiri, is a software developer at the Pre-Enabling team where he optimizes Deep Learning algorithms to find the peak algorithm performance on Intel processors. This knowledge is used to improve Intel’s performance across the entire deep learning stack from the hardware, through the libraries and up to the deep learning framework.

Evarist Fomenko, MD in Applied Mathematics, is a software development engineer in Intel MKL and Intel MKL-DNN where he designs and optimizes library functions, and interacts with internal and external teams to assist with integration. He has 5 years of experience working on hardware optimizations at Intel.

Young Jin Kim, PhD, is a Sr. Machine Learning Engineer with Intel’s AI Products Group (AIPG) where he develops and optimizes deep learning software frameworks for Intel’s hardware architecture by adopting the state-of-the-art techniques. He has over 10 years of experience working in artificial intelligence. Young received his PhD from Georgia Institute of Technology for his research in deep learning and high-performance computing. He holds over 10 peer reviewed publications in journals and conferences.

Haihao Shen, MD in Computer Science, is a deep learning engineer in machine learning and translation team (MLT) with Intel Software and Services Group (SSG). He leads the development of Intel Distribution of Caffe, including low precision inference and model optimizations. He has 6 years of experience working on software optimization and verification at Intel. Prior to joining Intel, he graduated from Shanghai Jiao Tong University.

Barukh Ziv, PhD, is a Senior Software Engineer, working with pre-Enabling group in SSGi, where he designs efficient implementations of DL applications for future generations of Xeon processors. He has 2 years of experience working on DL optimizations. Barukh received his Ph. D. in Technical Sciences from Kaunas University of Technology. He holds over 5 peer reviewed publications in journals and conferences.

Appendix A: Details on Quantization of Activations or Inputs With Negative Values

To convince the reader that these same formulas (see the section 8-bit Quantization of Activations or Inputs With Negative Values) generalize to convolutional layers, we use the indices of each tensor entry and work through the steps to show the convolutional output. Let Wf32 ∈ ℝ^(O×C×K×T) be the weight tensor with O kernels or output channels, C input channels, K height, and T width. The modified bias can be represented as:

b′f32(oi) = bf32(oi) − (K/Qa′) Σci Σki Σti Wf32(oi, ci, ki, ti),

where oi, ci, ki, and ti are the indices for the kernels or output channels, input channels, kernel height, and kernel width, respectively. The convolution output can be represented as follows. Note that we assume a batch size of one (to omit the batch index for simplicity), that the activations have already been zero padded in fp32 format (or equivalently padded with K = 128 in u8 format), and that the convolution stride is one.

Mathematical formula

Appendix B: Details on Fused Quantization With Skip Connections

The activation inputs to the layers marked by the blue arrow in Figure 7 are as follows where layer res2b_branch2a is abbreviated as 2b2a in the equations below with similar abbreviations for the other layers.

Mathematical formula

1. The raw compute can be calculated as AVX-512-frequency * number-of-cores * number-of-FMAs-per-core * 2-operations-per-FMA * SIMD-vector-length / number-of-bits-in-numerical-format / number-of-instructions. Two 512-bit FMA units located in Ports 0 and 5, computing in parallel per core, are available in the Intel Xeon Platinum processors and the Intel Xeon Gold processors 6000 series and 5122. Other Intel Xeon Scalable processor stock keeping units (SKUs) have one FMA unit per core located in Port 0 (see Chapter 15 for details). fp32, int16, and int8 FMAs require 1, 2, and 3 instructions, respectively, with the Intel AVX-512 instructions. The Intel Xeon Platinum 8180 has 28 cores per socket and 2 FMAs per core. The fp32 OPS per socket are approximately 1.99-GHz-AVX-512-frequency * 28-cores * 2-FMA-units-per-core * 2-OPS-per-FMA * 512-bits / 32-bits / 1-instruction = 3.570 fp32 TOPS. The int8 OPS per socket are approximately 2.17-GHz-AVX-512-frequency * 28-cores * 2-FMA-units-per-core * 2-OPS-per-FMA * 512-bits / 8-bits / 3-instructions = 5.185 int8 TOPS. The AVX-512 frequencies for multiple SKUs can be found here (these correspond to fp64 operations; the frequencies for lower numerical precision are higher, and the ones used in the fp32 and int8 TOPS computations above are estimates). The AVX-512 max turbo frequency may not be fully sustained when running high-OPS workloads.

2. In practice, these u8 values are usually closer to their minimum than their maximum if they are activations preceded by the ReLU activation function.

3. Two 512-bit FMA units computing in parallel per core are available in Intel Xeon Platinum processors, Intel Xeon Gold processors 6000 series and 5122. Other Intel Xeon Scalable processor SKUs have one FMA unit per core.

For more complete information about compiler optimizations, see our Optimization Notice.

