Non-Volatile Memory (NVM) technology for Exascale computing – A position paper
June 18, 2014

NVM offers the promise of persistent memory close to the CPU that is fast, large and cost-effective. This proclamation has been making the rounds for a long time (since 2008 and counting), which adds confusion about the actual characteristics of the technology. For example, consider Resistive RAM (RRAM), also called the Memristor by HP [1]: different sources contradict each other about the capabilities of such memory, with this survey [2] indicating slow write operations while this presentation [3] reports write latency similar to DRAM.

Nevertheless, despite the technology's late arrival on the market, it cannot be ignored in the race to Exascale computers. HP has made the most ambitious announcement, hoping to deliver operational systems by 2018.

It is important to note that the impact of NVM technology is made possible thanks to two other technologies: 3D stacking and on-chip photonics. The remainder of this paper will use as an example an NVM-based system such as the one proposed by HP [4]. We believe that other manufacturers are also engaged in similar efforts [5]. The expected characteristics of this kind of technology are as follows [6]:

1) Node NVRAM capacity significantly greater than DRAM capacity
2) Scaling down to less than 10 nm width per cell, ~ 32 Gbyte/cm2/layer by 2018
3) Scaling up to multiple (≥ 8) layers on chip, ~ 0.25 Tbyte/cm2/chip by 2018
4) Truly nonvolatile – many, many years
5) Random access at byte level
6) Fast cell write and erase (~ nanosec)
7) Low energy cell write and erase (~ picoJ)
8) Endurance > 10^10 cycles; the expectation is to exceed the 10^18 cycles of professional DRAMs.
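The per-chip density in item 3 follows directly from the per-layer density and layer count in items 2 and 3; a quick back-of-the-envelope check (a Python sketch, using only the figures quoted above):

```python
# Sanity-check the projected NVM density figures listed above.
# Figures taken from the text: ~32 GB/cm^2 per layer, >= 8 layers per chip.
GB_PER_CM2_PER_LAYER = 32
LAYERS_PER_CHIP = 8

gb_per_cm2_chip = GB_PER_CM2_PER_LAYER * LAYERS_PER_CHIP  # 256 GB/cm^2
tb_per_cm2_chip = gb_per_cm2_chip / 1024                  # binary TB

print(tb_per_cm2_chip)  # -> 0.25, matching the ~0.25 TB/cm^2/chip projection
```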

From the application point of view, having a large memory with close to DRAM performance and persistence is likely to introduce a revolution in application design.
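One way to picture this shift in the programming model is load/store access to a persistent region, with no serialization or block-I/O layer in between. The following is a minimal sketch in Python using a memory-mapped file as a stand-in for an NVM-backed region; the file path and the persistence-flush step are illustrative assumptions, not a description of any shipping NVM API:

```python
import mmap
import os
import struct

PATH = "counter.bin"   # illustrative stand-in for an NVM-backed region
SIZE = 4096

# Create/open the backing region and map it into the address space.
fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

# Byte-addressable, in-place update: increment a persistent counter
# directly in the mapped region.
count = struct.unpack_from("<Q", buf, 0)[0]
struct.pack_into("<Q", buf, 0, count + 1)
buf.flush()            # on real NVM this would be a cache-line flush/fence

buf.close()
os.close(fd)
```

Run again after a restart and the counter keeps its value: the data structure *is* the persistent state, which is the design revolution the paragraph above anticipates.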

This position paper explores the disruption that this kind of technology could bring to building efficient Exascale systems. One course of action would be to wait until this technology is on the market and broadly available. However, as analyzed in this paper, ignoring the potential of NVM may lead to the design of inefficient systems that are too difficult to manage and program, due to a software stack aimed at patching too many hardware issues (e.g. small memory nodes, resilience…). From an Exascale project perspective, there are two main strategies:

1) Ignoring NVM: design a software stack based on an incremental evolution of node structures. These challenges have been well-identified [7].
2) Considering NVM within compute nodes: if one believes that NVM will reach the market by 2018-2020, software issues become easier to handle when designing Exascale applications. The risk is that a lack of availability in the upcoming years could become an obstacle in the race to Exascale.

[References]

[1] EnterpriseTech, HP Puts Memristors at the Heart of a New Machine.

[2] Dong Li, Jeffrey S. Vetter, A Survey of Architectural Approaches for Managing Embedded DRAM and Non-volatile On-chip Caches – IEEE Transactions on Parallel and Distributed Systems, 23 May 2014, IEEE Computer Society Digital Library.

[3] Olle Heinonen (Seagate), Duk Shin (UC Davis), GTZ (UC Davis), Memristors – A Revolution on the Horizon.

[4] Parthasarathy Ranganathan, From Microprocessors to Nanostores: Rethinking Data-Centric Systems – Computer, vol. 44, no. 1, pp. 39-48, January, 2011.

[5] Jim Stevens et al., An Integrated Simulation Infrastructure for the Entire Memory Hierarchy: Cache, DRAM, Nonvolatile Memory, and Disk – Intel Technology Journal, Volume 17, Issue 1, 2013.

[6] Philippe Trautmann, Patrick Demichel, HPC Vision for Cloud & Exascale – HP, 2011.

[7] EESI Deliverables.

© HPC Today 2024 - All rights reserved.
