April 1, 2023

All businesses must make big bets from time to time, and the successful ones know when to walk away from a gamble that didn’t turn out the way they had hoped. Intel did just this at the end of July when the company announced the “wind down” of its Optane persistent memory products.

To get a better handle on the coming Intel Optane shutdown, let's dive into the reasons behind Intel's decision to launch the technology and how market factors ultimately led to its demise.

Rationale and expectations

Optane was an attempt to generate a sizable performance gap between Intel's processors and those from AMD.

Over the past decade, Intel used what was commonly called "The Intel Treadmill" to stay ahead of other processor manufacturers. The model enabled Intel to differentiate itself from competitors based on the following guidelines:

  • Sell CPUs built on an advanced process technology at a premium. Reap big profits.
  • Reinvest those profits in new leading-edge technology to stay ahead of all other processor makers.
  • Use that new leading-edge process to reap higher profits with the next process generation.

But the economics of semiconductor production slowly changed, taking the model out of Intel's hands.

The number of wafers an economical leading-edge wafer fabrication plant must process has steadily increased, moving well past what Intel's processors require. Add the mushrooming cost of building one of these fabs and it's clear that Intel couldn't build a leading-edge fab and use only a fraction of its output; that would render the company unprofitable. As a result, Intel's process technology fell behind that of the Taiwan Semiconductor Manufacturing Company, which produces wafers for AMD and other firms. Intel CEO Pat Gelsinger has since moved to address that shortfall, including an agreement to acquire chip maker Tower Semiconductor.

In the meantime, Intel needed a way to widen the competitive gap with equivalent or better technology. Intel's daring plan was to make a significant architectural change, based on a new memory technology called 3D XPoint, and sell it under the Optane brand.

The Optane strategy

Intel designed Optane to replicate a change that started around 2004. That year, NAND flash prices permanently fell below DRAM prices, which made it reasonable for all computer systems to incorporate SSDs to improve their price/performance ratios. By adding an SSD, a system could achieve the same performance with less DRAM and less money. Server farms with SSDs could often reduce their server counts.

Major performance improvements came from plugging a growing gap in the memory and storage hierarchy with a NAND flash SSD, which fit neatly between HDDs and DRAM. As a result, SSDs rapidly became a key component in most data centers.

Intel decided to fill the growing gap between DRAM and NAND SSDs with a new memory technology. Emerging memories, such as magnetoresistive RAM (MRAM), phase-change memory (PCM), resistive RAM and ferroelectric RAM, fit the bill because they offered features Intel could add to the memory and storage hierarchy to improve cost and performance. Most emerging memories have a smaller bit cell size than DRAM, so they should be cheaper to produce and purchase than DRAM. They're also faster than NAND flash. Plus, they're nonvolatile: they bring persistence closer to the processor, which could streamline systems unable to tolerate data loss from a power failure.
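The tiering logic described above is easy to picture as a ranked hierarchy. A minimal sketch follows; the latency figures are rough order-of-magnitude ballpark numbers assumed for illustration, not vendor specifications:

```python
# Approximate memory/storage hierarchy, fastest to slowest.
# Latencies are order-of-magnitude illustrations only (assumptions,
# not measured or vendor-published figures).
hierarchy = [
    ("SRAM cache",          1e-9),   # on-die, volatile
    ("DRAM",                1e-7),   # main memory, volatile
    ("3D XPoint / Optane",  3.5e-7), # persistent, fills the DRAM/NAND gap
    ("NAND flash SSD",      1e-4),   # persistent block storage
    ("HDD",                 5e-3),   # persistent, mechanical
]

# Each tier is slower but cheaper per bit than the one above it.
# Optane's role was to fill the roughly 1,000x latency gap between
# DRAM and NAND flash.
for name, latency in hierarchy:
    print(f"{name:20s} ~{latency:.0e} s")
```

The point of the sketch is simply that Optane sat between two tiers separated by about three orders of magnitude in latency, which is why a new tier there looked so attractive.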

Intel had been researching PCM since the 1960s, and introduced its first PCM chip in 1970, so this technology was a reasonable fit for that gap. If Intel could do this in a way that made Optane work only with Intel processors, it could drive a wedge between itself and its competition that might last for a very long time.

Importance of DIMMs

The natural fit for this approach was to produce Optane with a proprietary interface, not an SSD interface, that Intel could use to thwart any efforts by competitors to use the technology themselves. Although SSDs were Intel's first Optane products, they were only intended to build volume production of 3D XPoint memory early in the game.

SSD users prefer faster SSDs, and Intel produced faster SSDs using Optane. These SSDs didn't tap all the speed 3D XPoint memory offered, though, because the SSD interface was too slow.

It's hard to put a price on speed, and users decided that a relatively minor speed increase wasn't worth the premium Intel wanted to charge. As a result, the SSDs didn't sell in enough volume to drive the required production scale.

The larger plan was to create a module that ran at near-DRAM speeds. Intel chose to adapt the standard DDR4 memory bus to Optane's needs and to keep the changes secret as a competitive edge.

To this end, the company developed the DDR-T interface: DDR4 with a few additional signals to support a transaction protocol. The bus required support on both the DIMM and CPU sides of the interface, giving Intel a walled garden.

These DIMMs enabled 3D XPoint memory to deliver its full speed advantage to the system. Intel released them with its second-generation Xeon Scalable processor launch in early 2019.

But to gain adoption, these slower-than-DRAM DIMMs had to sell at lower-than-DRAM prices, and the cost to make them started out higher than the level at which Intel needed to price them.

Economics: The stumbling block

Any new memory technology can only reach the necessary cost target if it can ramp to production volumes like those of DRAM. That's how NAND flash prices crossed below DRAM prices.

A single-level cell NAND flash chip has always been approximately half the size of its DRAM counterpart, assuming both are built with the same process geometry and hold the same number of bits. Yet NAND flash costs didn't compete with DRAM's until 2004, when NAND flash wafer production reached one-third that of DRAM, according to estimates from semiconductor market research firm Objective Analysis. That's when the economies of scale tipped in NAND's favor.
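The economies-of-scale argument can be made concrete with a toy cost model. Every figure below is a hypothetical illustration, not a real industry number: cost per bit depends both on die area (bits per wafer) and on how many wafers the fixed fab cost is amortized across.

```python
# Toy cost-per-bit model (all numbers are illustrative assumptions,
# not actual industry figures).
# cost per bit = (variable wafer cost + amortized fab cost) / bits per wafer

def cost_per_bit(fixed_fab_cost, wafers, wafer_cost, bits_per_wafer):
    amortized = fixed_fab_cost / wafers  # fab cost share carried by each wafer
    return (wafer_cost + amortized) / bits_per_wafer

FAB = 10e9         # hypothetical $10B leading-edge fab
DRAM_BITS = 1e12   # hypothetical bits per DRAM wafer
NAND_BITS = 2e12   # NAND cell ~half the size, so ~2x bits per wafer

# At low NAND volume, amortization dominates and NAND costs more per bit:
nand_low  = cost_per_bit(FAB, 100_000, 5000, NAND_BITS)
dram      = cost_per_bit(FAB, 1_000_000, 5000, DRAM_BITS)
print(nand_low > dram)   # low-volume NAND is more expensive per bit

# Once NAND wafer volume approaches DRAM's (as happened around 2004),
# the smaller cell wins:
nand_high = cost_per_bit(FAB, 1_000_000, 5000, NAND_BITS)
print(nand_high < dram)  # at scale, NAND crosses below DRAM
```

The same arithmetic is what worked against Optane: with 3D XPoint wafer volume stuck far below DRAM's, the amortization term never shrank enough for the smaller cell to pay off.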

This is where Intel made its big gamble: subsidize the initial effort until consumption levels rose high enough to deliver those economies of scale. So far, that hasn't happened.

Consequently, Intel has lost more than $7 billion in its efforts to squeeze Optane's costs down. It seems upper management decided this was as far as it wanted to go and discontinued the product.

Chart of Intel Optane losses
Intel has lost more than $7 billion on Optane, according to estimates from Objective Analysis.

Could things have turned out differently? Probably not by much. Had Intel priced Optane SSDs more aggressively, they might have gained more popularity, but then the losses would have been steeper. If we assume management budgeted the gamble's losses at a fixed amount, the Intel Optane shutdown would simply have occurred that much earlier.

Optane’s legacy and what comes next

Will current Optane users be left in the lurch? In a statement read at the Flash Memory Summit, Intel made it clear the company will support existing users, so that's not an issue.

However, companies that depended on Optane for fast storage will need to migrate to another, more expensive product for future designs, the easiest path being an NVDIMM. Companies that don't need persistence but took advantage of Optane's larger memory sizes will also pay more to use DRAM instead in their next system iteration. Neither move should break the bank for these companies, but their profits will be somewhat lower.

Optane does leave behind a positive legacy, and the industry learned a lot from its introduction. The Compute Express Link (CXL) interconnect may have been designed with Optane in mind, and the Storage Networking Industry Association developed a Nonvolatile Memory Programming Model that promises to speed up many other forms of storage. This model will be a boon as processors move to new process technologies that incorporate nonvolatile MRAM caches. MRAM should become the norm as chip processes continue to shrink and static RAM fails to shrink along with them.

Optane opened the industry's eyes to the idea that different storage speeds require a vastly different bus approach than the fixed-speed path the industry has followed since synchronous DRAM in the early 1990s. In addition, Optane taught the industry that servicing interrupts with slow context switches is inadequate for handling both fast and slow memory.
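The point about context switches is easy to quantify with rough arithmetic; the latency figures below are order-of-magnitude assumptions, not measurements:

```python
# Rough arithmetic (order-of-magnitude assumptions, not measured values):
# an interrupt-driven I/O completion costs roughly two context switches,
# which is negligible for a slow device but dominates for a fast one.

CTX_SWITCH_US = 5.0      # assumed cost of one context switch
HDD_US        = 5_000.0  # assumed HDD access time
OPTANE_US     = 10.0     # assumed Optane-class access time

def interrupt_overhead(device_us, ctx_us=CTX_SWITCH_US):
    """Fraction of total service time spent on the two context switches."""
    return 2 * ctx_us / (device_us + 2 * ctx_us)

print(f"HDD:    {interrupt_overhead(HDD_US):.1%}")     # tiny fraction
print(f"Optane: {interrupt_overhead(OPTANE_US):.1%}")  # overhead dominates
```

With these assumed numbers, the context-switch overhead is a fraction of a percent for an HDD but about half the total service time for an Optane-class device, which is why polling-style completion paths became attractive for fast memory.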

The industry also started to reuse the term non-uniform memory architecture (NUMA) to describe memory systems that combine faster and slower memory types. This approach enables memory on both ends of a CXL link to map almost seamlessly into a processor's memory space.

Perhaps Intel will share some of what it learned about PCM manufacturing or transactional DRAM interfaces to help others in the industry. No matter what happens, more people are now open to the notion of adding another layer to the memory and storage hierarchy.
