**Knights Mill SKUs**

| Model | Cores | Threads | TDP | Frequency |
|-------|-------|---------|-------|-------------|
| 7235  | 64    | 256     | 250 W | 1.3–1.4 GHz |
| 7285  | 68    | 272     | 250 W | 1.3–1.4 GHz |
| 7295  | 72    | 288     | 320 W | 1.5–1.6 GHz |
All three parts have 36 PCIe Gen 3.0 lanes as well as 16 GiB of high-bandwidth Multi-Channel DRAM (MCDRAM). They also support up to 384 GiB of six-channel DDR4 memory.
As you might imagine, the system architecture is almost identical to Knights Landing's. What has changed is the pipeline implementation:
**Differences Between KNL and KNM**

| Arch | Single Precision | Double Precision | Variable Precision |
|------|------------------|------------------|--------------------|
| Knights Landing | 2 ports × 1 × 32 | 2 ports × 1 × 16 | — |
| Knights Mill | 2 ports × 2 × 32 | 1 port × 1 × 16 | 2 ports × 2 × 64 |
These new operations are supported through the introduction of three new AVX-512 extensions:

- AVX512_4FMAPS – adds vector instructions for deep learning on single-precision floating point
- AVX512_4VNNIW – adds vector instructions for deep learning on enhanced-word variable precision
- AVX512_VPOPCNTDQ – adds doubleword and quadword population-count instructions
It’s interesting to note that Intel has no plans to integrate the first two extensions into its mainstream processors; only VPOPCNTDQ will make it into the future Ice Lake server parts.