There are quite a few specs to keep in mind when picking a new graphics card, and one of the more notable and important ones is undoubtedly the memory.
Over the past decade, the majority of GPUs used GDDR5 memory, but both AMD and Nvidia have now switched to GDDR6. However, these two are far from being the only types of graphics RAM to be used in GPUs over the years.
One relatively new technology that you don’t commonly see implemented in modern GPUs is HBM, followed by its successors – HBM2 and HBM2E.
So, what are HBM, HBM2, and HBM2E? What distinguishes them from each other and from GDDR memory? Are HBM-equipped GPUs better for gaming? We’ll answer all of that below!
What Is HBM?
HBM is a type of SDRAM (synchronous dynamic random-access memory), much like GDDR, that you’ll find in some graphics cards today. The acronym stands for High Bandwidth Memory, so the name itself is a dead giveaway as to its defining characteristic – bandwidth. But of course, that’s not all it has to offer.
In addition to having significantly greater bandwidth than DDR or GDDR memory, HBM also uses less power and takes up less space on the PCB as multiple memory dies are “stacked” on top of each other.
A single HBM stack has a 1024-bit memory bus and 128 GB/s of bandwidth, and with four stacks, a GPU gets a 4096-bit memory bus and 512 GB/s of total bandwidth, along with up to 4 GB of memory. Needless to say, these specs were leagues ahead of what the then-standard GDDR5 could do back in 2015, when the first HBM-equipped GPUs were introduced.
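The bandwidth figures above follow directly from the bus width and the per-pin data rate (HBM1 runs at 1 Gb/s per pin). Here’s a minimal sketch of that arithmetic; the function name is our own, not part of any spec:

```python
def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin rate (Gb/s) / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

# One HBM1 stack: 1024-bit interface at 1 Gb/s per pin
print(bandwidth_gbs(1024, 1.0))  # 128.0 GB/s

# Four stacks side by side (e.g. a 4096-bit bus, as on the Radeon R9 Fury X)
print(bandwidth_gbs(4096, 1.0))  # 512.0 GB/s
```

The same formula explains HBM2’s numbers: doubling the per-pin rate to 2 Gb/s on the same 1024-bit interface yields 256 GB/s per stack.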
However, only a handful of AMD GPUs ultimately featured HBM memory, including the Radeon R9 Fury, the Radeon R9 Nano, the Radeon R9 Fury X, and the Radeon Pro Duo. It was only with its successor, HBM2, that the technology saw somewhat wider adoption.
What Is HBM2?
Moving on, we get to HBM2, the second generation of High Bandwidth Memory, which greatly improved upon its predecessor’s key specs: it doubled the bandwidth to 256 GB/s per stack, supported up to 8 GB of memory per stack, and allowed for stacks of up to 8 dies, greatly increasing the overall performance potential compared to the original HBM.
As mentioned above, HBM2 has been implemented in a wider variety of GPUs over the years, including the AMD Radeon RX Vega series, the Radeon VII, and a number of Radeon Pro cards. Nvidia also used it in several workstation-oriented GPUs, such as the Titan V and the Quadro GP100, among a few others.
What Is HBM2E?
And then, we have the latest iteration of High Bandwidth Memory – HBM2E. It was first announced by JEDEC in 2018, supporting a bandwidth of up to 307 GB/s per stack, a maximum stack height of 12 dies, and up to a whopping 24 GB of memory per stack.
Since then, Samsung and SK Hynix have both announced their own HBM2E variants. Both support stacks of up to 8 dies and 16 GB of memory per stack, but they push the bandwidth even further: Samsung’s HBM2E offers 410 GB/s per stack, while SK Hynix’s goes up to 460 GB/s.
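Those vendor figures also imply the per-pin data rates, since every HBM generation keeps the same 1024-bit interface per stack. A quick sketch of that back-of-the-envelope math (the helper name is ours):

```python
def pin_rate_gbps(stack_bandwidth_gbs: float, bus_width_bits: int = 1024) -> float:
    """Per-pin data rate (Gb/s) implied by a stack's bandwidth over its memory bus."""
    return stack_bandwidth_gbs * 8 / bus_width_bits

print(pin_rate_gbps(410))  # Samsung's HBM2E: ~3.2 Gb/s per pin
print(pin_rate_gbps(460))  # SK Hynix's HBM2E: ~3.6 Gb/s per pin
```

For comparison, the same calculation gives 1 Gb/s per pin for original HBM and 2 Gb/s for baseline HBM2.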
However, HBM2E hasn’t made its way into any consumer graphics cards so far, though that may change in the near future as AMD pushes hard to challenge Nvidia in the high-end GPU market.
HBM vs GDDR6 – Is HBM Good For Gaming?
And now, we have the inevitable question – is HBM better than GDDR6, and is it good for gaming? We already have a full article dedicated to the subject here, but here’s the gist of it:
HBM and its subsequent iterations are definitely superior to GDDR6 if we only look at the specs on paper. However, if we’re talking gaming specifically, HBM doesn’t really offer any major advantages when it comes to in-game performance. Why? Simply because modern games neither need that kind of memory bandwidth nor are optimized to take advantage of it.
In addition to that, HBM is also more expensive to manufacture than either GDDR5 or GDDR6, and this ultimately drives up the prices of GPUs that implement it.
As such, HBM is currently only really worth it for high-end workstations running GPU- and memory-intensive applications that can actually benefit from the monstrous bandwidth, as it is there that HBM can truly shine.
And that would be about it for this article. Hopefully, you’ve found it helpful and it has cleared up any confusion that you may have had regarding the differences between different versions of HBM or just when it comes to HBM memory in general.
As always, if you think we skipped something important or made any errors, feel free to point it out in the comments and we’ll do our best to fix the article ASAP. And if you’re shopping for a new GPU right now, we suggest checking out our selection of the best GPUs of 2022, as you’ll probably find something to fit your needs and budget there.