Inside Intel Knights Landing Architecture
By   |  January 15, 2016

Knights Landing is the codename for Intel’s forthcoming Xeon Phi processor dedicated to HPC. Few details had emerged until recently, beyond the fact that it will sport 72 cores, run up to 288 threads in parallel, and pack a massive 8-billion-transistor die. However, several key announcements have been made since SC’15 in November, and the picture is becoming clearer as the Q1 2016 launch date approaches.

The 14nm successor to Knights Corner (the first-generation Xeon Phi), Knights Landing implements AVX-512, Multi-Channel DRAM (MCDRAM), and a new CPU core based on Intel’s Silvermont architecture. Knights Landing is now shipping to Intel’s first customers and developers as part of its early-ship program, and pre-production systems demonstrating supercomputer designs are up and running. Knights Landing is ultimately ramping up for general availability in Q1 2016.
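To give a concrete feel for what AVX-512 brings, the short C sketch below uses the AVX-512 Foundation intrinsics to add two arrays of doubles eight elements at a time with 512-bit registers. It is a minimal illustration rather than tuned Knights Landing code, and it assumes a compiler with AVX-512 support (built with -mavx512f, for instance); the function name add_avx512 is ours.

#include <immintrin.h>  /* AVX-512 intrinsics */
#include <stdio.h>

/* Add two double arrays eight elements per iteration using 512-bit registers.
 * Assumes n is a multiple of 8 to keep the example short. */
static void add_avx512(const double *a, const double *b, double *c, size_t n)
{
    for (size_t i = 0; i < n; i += 8) {
        __m512d va = _mm512_loadu_pd(a + i);   /* unaligned load of 8 doubles */
        __m512d vb = _mm512_loadu_pd(b + i);
        _mm512_storeu_pd(c + i, _mm512_add_pd(va, vb));
    }
}

int main(void)
{
    double a[8], b[8], c[8];
    for (int i = 0; i < 8; i++) { a[i] = i; b[i] = 2.0 * i; }
    add_avx512(a, b, c, 8);
    printf("c[7] = %f\n", c[7]);   /* 7 + 14 = 21 */
    return 0;
}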

One Platform, Three Processors
First things first: instead of one product, Knights Landing will in fact be launched as three different products, packaged differently to suit different usage scenarios. Given that Knights Landing is available as a standalone processor as well as a coprocessor, Intel evidently expects much of the machinery built around it to be a mix of Xeon and Xeon Phi systems clustered together and working side by side, rather than with the Xeon Phi hanging off the Xeons as a coprocessor.

The base bootable Knights Landing chip has 16 GB of MCDRAM high bandwidth memory right on the package and DDR4 memory controllers to link out to a maximum of 384 GB of regular DRAM memory. The chip has two PCI-Express 3.0 x16 ports and one x4 root port as well as a southbridge chipset to link to various I/O devices. The Omni-Path interconnect is not on the chip itself, but is rather implemented in the package, with each PCI-Express x16 port having its own bi-directional, 100 Gb/sec ports. Each port can deliver 25 GB/sec of bandwidth in both directions. Integrating an InfiniBand or Omni-Path interconnect directly on a Xeon or Xeon Phi die is tricky, so Intel is integrating on the package first as a means of lowering the overall cost and power consumption in the Knights Landing platform.
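When the MCDRAM is exposed as addressable memory rather than as a cache, applications can request it explicitly. The sketch below is a minimal example using the memkind library’s hbwmalloc interface (hbw_check_available, hbw_malloc, hbw_free) to place a buffer in high-bandwidth memory; it assumes the memkind library is installed (link with -lmemkind) and is not Intel’s reference code.

#include <stdio.h>
#include <stdlib.h>
#include <hbwmalloc.h>   /* high-bandwidth-memory allocator from the memkind library */

int main(void)
{
    size_t n = 1 << 20;   /* one million doubles, roughly 8 MB */

    /* hbw_check_available() returns 0 when high-bandwidth (MCDRAM) nodes exist. */
    if (hbw_check_available() != 0) {
        fprintf(stderr, "No high-bandwidth memory detected\n");
        return 1;
    }

    double *buf = hbw_malloc(n * sizeof *buf);   /* allocate from MCDRAM */
    if (buf == NULL)
        return 1;

    for (size_t i = 0; i < n; i++)
        buf[i] = (double)i;                      /* touch the buffer */

    hbw_free(buf);   /* pointers from hbw_malloc must be released with hbw_free */
    return 0;
}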

The free-standing Knights Landing coprocessor card will put a non-transparent bridge (NTB) chip on the card, linking it to the host processor through one PCI-Express x16 port as a PCI-Express endpoint. This version of the chip will also have its DDR4 memory channels deactivated and will only have the 16 GB of MCDRAM near memory for the processor to access. It is unclear whether all of the memory modes developed for Knights Landing will be supported in the coprocessor, but given the latencies of moving data over the PCI-Express bus, we suspect not. If the price of the free-standing Knights Landing cards is low enough, they could sell well, but we expect most enterprises, hyperscalers, and HPC centers will be interested in the self-booting versions of the Knights Landing chips, not the coprocessors.
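Whatever modes each version ends up supporting, one way to see how MCDRAM and DDR4 are exposed on a given Knights Landing system is to inspect the NUMA topology: when MCDRAM is configured as addressable (flat) memory, it appears as an extra NUMA node with memory but no CPUs. The sketch below is a simple illustration using libnuma (link with -lnuma), not an official Intel tool.

#include <numa.h>    /* libnuma: NUMA topology and policy API */
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    /* On a flat-mode Knights Landing system, MCDRAM typically shows up as an
     * additional NUMA node holding memory but no CPUs. */
    int max_node = numa_max_node();
    for (int node = 0; node <= max_node; node++) {
        long long free_bytes = 0;
        long long size = numa_node_size64(node, &free_bytes);
        printf("node %d: %lld MB total, %lld MB free\n",
               node, size >> 20, free_bytes >> 20);
    }
    return 0;
}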

Intel Omni-Path Architecture
Meanwhile, Knights Landing’s partner in processing, Intel’s Omni-Path Architecture, was formally launched at SC’15. Intel’s own take on a high-bandwidth, low-latency interconnect for HPC, Omni-Path marks Intel’s greatest effort yet to diverge from InfiniBand and go its own way in the market for interconnect fabrics.

Intel Scalable System Framework
Ultimately, Knights Landing and Omni-Path Architecture are part of Intel’s larger effort to build a whole ecosystem, which the company has been calling the Scalable System Framework.
