JEDEC Extends HBM2 Standard to 24GB, 307GB/s Bandwidth Per Stack
Ever since it debuted in 2015, High Bandwidth Memory (HBM) has promised to deliver huge amounts of RAM bandwidth at power consumption levels that traditional GDDR arrays can't match. It's largely kept that promise, albeit at pricing that has kept it out of reach for most consumer GPUs and limited its applicability to the high end of the market. JEDEC has just announced an extension to the existing HBM2 standard, increasing overall density and increasing its speed, though it's not clear if this will be enough to spark further adoption in GPUs.
According to the new standard, HBM2 now supports up to 24GB per stack in a 12-Hi arrangement. Previously, HBM2 topped out at 16GB per stack in an 8-Hi organization (AMD's 7nm Vega-derived Radeon Instinct MI60 offers 32GB of HBM2 memory in four stacks, with a 4096-bit bus total and 1TB/s of memory bandwidth). The maximum transfer rate for HBM2 has been increased from 2Gbps per pin to 2.4Gbps, which works out to a total per-stack bandwidth of 307GB/s, up from 256GB/s. A 7nm Vega equipped with this RAM would, therefore, hit 1.23TB/s of memory bandwidth — not too shabby, by any stretch of the imagination, and a far cry from where we were just a few years ago.
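The figures above follow directly from the stack's interface width. A quick sketch of the arithmetic, assuming the 1024-bit-per-stack interface that HBM2 defines (pin rate times bus width, divided by 8 bits per byte):

```python
# Back-of-the-envelope check of the HBM2 bandwidth figures.
# Assumes the 1024-bit interface width per stack defined by the HBM2 standard.

def stack_bandwidth_gbs(pin_rate_gbps, interface_bits=1024):
    """Per-stack bandwidth in GB/s: pin rate (Gbps) x bus width / 8 bits per byte."""
    return pin_rate_gbps * interface_bits / 8

old_spec = stack_bandwidth_gbs(2.0)  # original HBM2: 256.0 GB/s per stack
new_spec = stack_bandwidth_gbs(2.4)  # updated HBM2: 307.2 GB/s per stack

# A four-stack design (4096-bit bus, as on the Radeon Instinct MI60):
four_stacks_tbs = 4 * new_spec / 1000  # ~1.23 TB/s

print(old_spec, new_spec, four_stacks_tbs)
```

Four stacks at the old 2Gbps pin rate give the MI60's 1TB/s figure; the same configuration at 2.4Gbps yields the 1.23TB/s cited above.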
It's not clear, yet, how much runway HBM2 has outside these specialized markets. In a recent evaluation of the future HBM3 standard for exascale, the Exascale Computing Project found that taking advantage of HBM3 will require updates and improvements over the simulated Knights Landing architecture that project used to estimate the value of HBM3's increased bandwidth (expected to double over HBM2). This extension to the existing HBM2 standard actually delivers some of those gains early, though the final version of HBM3 is expected to double available bandwidth, not just improve it by 20 percent. Still, the findings suggest that further rearchitecting of chips to take advantage of the massive bandwidth HBM makes available will be required.
Meanwhile, the consumer GPU market has largely gotten along just fine without it. AMD undoubtedly saved power by adopting HBM for its Fury and Vega GPUs. The technology has not followed the standard path for a new memory introduction. If it had, we'd now see HBM as a ubiquitous choice on midrange cards, if not present throughout the entire stack. Instead, there's no suggestion that either AMD or Nvidia will tap it for next-generation hardware. Nvidia has already adopted GDDR6 for its RTX family, and AMD is expected to follow suit for Navi. This is not to say that HBM won't continue to play a role in the high end of the market, but it seems to be finished as a mainstream GPU solution.
Now Read:
- Samsung Announces High-Speed HBM2 Breakthrough, Codenamed Aquabolt
- SK Hynix Will Launch GDDR6 in 2018, But What About HBM2?
- Samsung aims to conquer the memory market with HBM3
Source: https://www.extremetech.com/computing/282556-jedec-extends-hbm2-standard-to-24gb-307gb-s-bandwidth-per-stack