What to expect from the next generation of RAM

DDR RAM

DDR4 memory has been available for some time now, but with support limited to Intel's X99 platform, it hasn't yet overtaken DDR3 as the go-to memory, and it's still far more expensive than DDR3. As newer Intel chipsets add DDR4 support and it becomes more ubiquitous in the coming year, it's a good time to look ahead to the memory types we might be using after DDR4. Extremetech has a great primer on three new and upcoming memory technologies: Wide I/O, High Bandwidth Memory (HBM) and Hybrid Memory Cube (HMC).

Wide I/O (and Wide I/O 2) is a high-bandwidth, low-power standard designed for (and most useful in) mobile SoCs. It has been backed by Samsung and other smartphone manufacturers, since high-resolution handheld displays demand plenty of bandwidth, while keeping power draw low is critical to battery life. Wide I/O is the first version of the standard, but it's likely that Wide I/O 2 or 3 is the version that actually makes it to market. No major devices are expected to ship with Wide I/O in the first half of 2015, but late 2015 may see the standard gain some limited ground.

According to Crucial, DDR4 bandwidth maxes out at about 25.6 GB/s. Wide I/O, on the other hand, offers 12.8 GB/s, at least according to an old Xbitlabs post. Wide I/O 2 or 3 may offer significantly more bandwidth, and keep in mind that this is a technology designed for power efficiency first and foremost. It could eventually be a big deal for gaming laptops and other portable hardware.
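
For a rough sense of where those peak numbers come from, memory bandwidth is essentially transfer rate multiplied by bus width. Here's a back-of-envelope sketch; the module speeds and bus widths below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope peak bandwidth: transfers per second x bytes per transfer.
# The specific speeds and widths here are illustrative assumptions.

def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    """Peak bandwidth in GB/s from transfer rate (MT/s) and bus width (bits)."""
    bytes_per_transfer = bus_width_bits / 8
    return transfer_rate_mts * 1e6 * bytes_per_transfer / 1e9

# A single DDR4-3200 channel: 3200 MT/s over a 64-bit bus is about 25.6 GB/s,
# in line with the Crucial figure cited above.
print(peak_bandwidth_gbs(3200, 64))   # 25.6

# First-generation Wide I/O trades speed for width: a 512-bit interface
# at 200 MT/s lands at the 12.8 GB/s mark mentioned above.
print(peak_bandwidth_gbs(200, 512))   # 12.8
```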

Next we have Hybrid Memory Cube (HMC), a joint standard from Intel and Micron that offers significantly more bandwidth than Wide I/O 2 but at the cost of higher power consumption and price. HMC is a forward-looking architecture designed for multi-core systems, and is expected to deliver bandwidths of up to 400GB/s, according to Intel and Micron. Production could begin next year, with HMC commercially available in 2017.

Finally, High Bandwidth Memory (HBM) is a specialized application of Wide I/O 2, explicitly designed for graphics. (Both AMD and Nvidia plan to adopt it for their next-generation GPUs.) HBM can stack up to eight 128-bit wide channels for a 1024-bit interface, allowing for total bandwidth in the 128-256GB/s range. In other words, it's not as cheap or power efficient as Wide I/O, but it should be cheaper than HMC.

Also, since it's designed explicitly for high-performance graphics workloads, future GPUs built with HBM might reach 512GB/s to 1TB/s of main memory bandwidth. That's a significant upgrade over the 336GB/s of the current top-end Titan Black.
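
Those figures follow from the same arithmetic: a 1024-bit stack running at 1-2 Gbps per pin gives 128-256GB/s, and a GPU combining several such stacks lands in the 512GB/s-1TB/s range. A quick sketch, with the per-pin rates and the four-stack layout as assumptions for illustration:

```python
# HBM bandwidth sketch: per-pin data rate x interface width, summed over stacks.
# Per-pin rates and the four-stack GPU layout are illustrative assumptions.

def hbm_stack_gbs(pin_rate_gbps, interface_bits=1024):
    """Peak bandwidth of one HBM stack in GB/s."""
    return pin_rate_gbps * interface_bits / 8

print(hbm_stack_gbs(1.0))          # 128.0 GB/s per stack at 1 Gbps/pin
print(hbm_stack_gbs(2.0))          # 256.0 GB/s per stack at 2 Gbps/pin

# A hypothetical GPU with four stacks:
print(4 * hbm_stack_gbs(1.0))      # 512.0 GB/s
print(4 * hbm_stack_gbs(2.0))      # 1024.0 GB/s, roughly 1 TB/s
```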

If you enjoy digging into the nitty-gritty of future memory formats (and who doesn't, really?), check out Extremetech's full article for a deeper dive.

Bo Moore

As the former head of PC Gamer's hardware coverage, Bo was in charge of helping readers better understand and use PC hardware. He also headed up the buying guides, picking the best peripherals and components to spend your hard-earned money on. He can usually be found playing Overwatch, Apex Legends, or more likely, with his cats. He is now IGN's resident tech editor and PC hardware expert.