New Memories Take Center Stage

In the same way that various prophets of doom foretell the imminent demise of Moore’s Law, we often hear that conventional memory technologies are going to run out of steam soon.

However, the semiconductor industry is highly skilled at extending its existing architectures rather than making the leap to shiny new ones with apparently compelling advantages. Thus, incremental advances in conventional technology have delayed the introduction of a raft of exciting new memory technologies.

The prevailing wisdom in the industry seems to be that the enormous investment necessary to bring new memory technologies to volume is prohibitive – effectively postponing them indefinitely. Shrinking a conventional technology by one node can cost hundreds of millions of dollars. Introducing a brand new one could be significantly more expensive and much more risky.

This was made clear recently at an Applied Materials-supported panel discussion at IEDM. The panel consensus was pragmatic: evolutionary advances in conventional technology, coupled with fast-maturing methods of stacking multiple die in a single package, can meet performance, power and cost targets for the next several years.

However, another panel, made up of experts from next-generation memory companies, met more recently to address the same questions. The meeting, titled “Total Recall,” was sponsored by the MIT Club of Northern California and was hosted by Applied Ventures, the venture capital arm of Applied Materials. I recommend you follow the links to their websites to learn more about their technologies. It’s fascinating reading.

Serious Players


Panelist                            Company                  Technology
Dr. H.-S. Philip Wong (moderator)   Stanford University      Everything
Dr. Rob Aitken                      ARM Holdings             Agnostic
Ishai Naveh                         Adesto Technologies      Conductive-Bridging RAM
Dr. Rajiv Ranjan                    Avalanche Technology     Magnetoresistive RAM
Dr. Jon Slaughter                   Everspin Technologies    Magnetoresistive RAM
Dr. Steve Hudgens                   Ovonyx                   Phase-Change Memory
Dr. Louis Parrillo                  Unity Semiconductor      Resistive RAM

Balancing the memory guys on the panel was Dr. Rob Aitken, fellow at microprocessor designer ARM Holdings plc, a company that Rob described as the “consumers of the next big thing.” It’s companies like ARM that will shoulder the risk of switching memory technologies, so turning experts like Rob into champions for a new technology is critical to its success.

These companies are serious players with real, working devices. Everspin is in production, albeit at relatively low volumes. Adesto has announced a foundry partner. Ovonyx has inked licensing deals with major chipmakers. Unity has signed up Seagate and Micron as development partners. They aren’t peddling vaporware.

But when, asked the moderator, Stanford’s Professor H.-S. Philip Wong, will the tipping point be reached that pushes one or more into the mainstream?

Let’s take a step back for a minute. What’s wrong with the memories we have now: SRAM, DRAM and Flash?

What’s Wrong with Today’s Technology?

SRAM is the speed demon, but the problem is that SRAM transistors take a lot of real estate, sometimes more than half the chip area. They’re also the most difficult transistors on the chip to run reliably at low voltages. In fact, they’re probably the biggest barrier to microprocessor voltage scaling today. However, as ARM’s Rob Aitken pointed out, there are no viable alternatives because nothing else is capable of keeping up with the gigahertz clock cycles of the CPU.

[Figure: the memory hierarchy]

DRAM is next in the memory hierarchy. The problem is that DRAM cells are leaky – you have to keep refilling the capacitor every hundred milliseconds or so, otherwise the data simply evaporates. This wastes time – you can’t use the memory during the refresh cycle – and power. All those leaked electrons account for 30% of DRAM power consumption today, according to Adesto’s Ishai Naveh. As you scale down, these twin problems get progressively worse.
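
To get a feel for the time cost alone, here is a back-of-envelope sketch. The parameter values (refresh window, rows per bank, time per refresh command) are typical illustrative assumptions, not figures from the panel:

```python
# Back-of-envelope estimate of DRAM refresh overhead.
# All parameter values below are illustrative assumptions.

T_REFW = 64e-3          # refresh window: every row refreshed within ~64 ms
ROWS_PER_BANK = 8192    # rows that need refreshing in one bank
T_RFC = 350e-9          # time one refresh command ties up the bank

# Fraction of time the bank is unavailable because it is busy refreshing.
busy_fraction = ROWS_PER_BANK * T_RFC / T_REFW
print(f"Bank busy refreshing: {busy_fraction:.1%}")  # roughly 4.5%
```

And every one of those refresh operations consumes power whether or not the stored data is ever read.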

The twin shortcomings of Flash* are speed and lifetime, or endurance. Where SRAM and DRAM have effectively unlimited endurance (at least 10^15 cycles), a flash chip will start to lose cells after as few as 10,000 cycles. That is the primary reason Flash isn’t used for the intensive tasks that SRAM and DRAM excel at. This degradation is generally hidden from the user by health-monitoring controller circuitry that “walls off” parts of the chip to prevent errors, but reliability remains an issue. In addition, although Flash is extremely non-volatile compared to SRAM and DRAM, charge leakage means that after ten years or so it will self-erase. This is hardly an issue for active use, but it does mean Flash isn’t a good archival storage method.
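
The “walling off” that controllers do can be sketched in a few lines. This is a deliberately simplified model assumed for illustration – real controllers monitor error rates directly and use far more sophisticated wear-leveling policies:

```python
# Simplified sketch of a Flash controller retiring worn-out blocks.
# The erase-count model and the 10,000-cycle limit are assumptions
# for illustration; real controllers track error rates directly.

ENDURANCE_LIMIT = 10_000  # erase cycles before a block is walled off

class FlashController:
    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks
        self.bad_blocks = set()  # blocks walled off from further use

    def pick_block(self) -> int:
        """Wear-leveling: write to the least-erased block still in service."""
        live = [b for b in range(len(self.erase_counts))
                if b not in self.bad_blocks]
        if not live:
            raise RuntimeError("all blocks worn out")
        return min(live, key=lambda b: self.erase_counts[b])

    def erase(self, block: int) -> None:
        self.erase_counts[block] += 1
        if self.erase_counts[block] >= ENDURANCE_LIMIT:
            self.bad_blocks.add(block)  # wall off the worn block

controller = FlashController(num_blocks=4)
controller.erase(controller.pick_block())
```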

The speed problem is the penalty for Flash’s remarkable cell density – six to ten times better even than DRAM: data must be written and read (“flashed”) in large blocks. It’s like picking up a dictionary when you only want a single word. This speed limitation isn’t a problem today because the memory hierarchy gives us SRAM and DRAM to do the rapid stuff, but it does explain why we don’t use Flash as a “universal memory.”
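
The dictionary analogy translates directly into code. In this assumed toy model, changing a single byte forces the controller to read, modify and rewrite an entire block (real NAND distinguishes program pages from erase blocks, which this sketch glosses over):

```python
# Toy model of Flash block access: one changed byte costs a whole
# block rewrite. The block size and the erase-before-write behavior
# are simplifying assumptions.

BLOCK_SIZE = 128 * 1024  # bytes per erasable block (assumed)

def write_one_byte(stored_block: bytes, offset: int, value: int) -> bytes:
    buffer = bytearray(stored_block)  # 1. read the entire block out
    buffer[offset] = value            # 2. change the one byte we care about
    return bytes(buffer)              # 3. erase and program the whole block back

block = bytes(BLOCK_SIZE)
block = write_one_byte(block, offset=42, value=0xFF)
```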

Universal Memory

So what would a “universal memory” look like? It would be ultra-fast and non-volatile, with unlimited endurance, low power consumption, good scalability and a very, very low cost.

Sadly, the physics is against it, said Adesto’s Ishai Naveh. “Universal memory is a 40-year myth,” said Everspin’s Slaughter, adding that there was “no point in universal memory.” Each of the technologies we use today excels at its assigned task. Why force a compromise technology into all three roles when it would result in a poorer experience for the end user?

Of all the new memories, MRAM seems to be closest to universal. It has much lower power consumption than DRAM, truly unlimited endurance and should retain data for 20 years. But even so, it’s not quite as fast as SRAM and can’t approach the cell density of Flash. Moreover, incorporating magnetic materials onto chips is unknown and scary territory, according to ARM’s Rob Aitken.

So much for universal memory. But let’s return to Professor Wong’s question: what will the tipping point be that pushes a new memory technology into the mainstream?

Tipping Point

Given the immense barrier, Jon Slaughter believes there won’t be one. Instead, new memories will find cracks in the barrier: niche applications where some special quality of a new memory makes it irresistible. Ishai Naveh expanded on that, pointing out that qualification – the long process of establishing good production yields and proving field reliability – means years of slow but cumulative growth in the installed base until a critical mass of data and experience is reached, after which the risk of using a new memory in a mainstream product becomes acceptable. Ovonyx’s Steve Hudgens said his company fabricates its own devices in-house specifically to help it along the path toward qualification.

It’s nice to know that whenever conventional memory technologies do finally run out of steam, some quality alternatives will be just offstage, quietly maturing and waiting for their turn in the spotlight.


* It’s worth noting that there are actually two types of Flash: NAND and NOR. NOR works more like a non-volatile version of DRAM in that you don’t have to read a whole block to get at one bit. However, with low endurance, slow write speeds and a relatively poor bit density, NOR has fallen out of favor and, like this paragraph, is now just a footnote. Today, Flash means NAND Flash.
