QLC Goes To 8TB: Samsung 870 QVO and Sabrent Rocket Q 8TB SSDs Reviewed
by Billy Tallis on December 4, 2020 8:00 AM EST
This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This test is not representative of any ordinary client/consumer usage pattern, but it does allow us to observe transitions in the drive's behavior as it fills up. This can allow us to estimate the size of any SLC write cache, and get a sense for how much performance remains on the rare occasions where real-world usage keeps writing data after filling the cache.
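The fill test described above can be sketched in a few lines. This is a simplified, single-threaded illustration (the actual test uses queue depth 32, which would require an async I/O tool such as fio); the file path and sizes are placeholders, not part of the original methodology.

```python
import time

BLOCK = 128 * 1024   # 128 kB sequential writes, as in the article's test
SEGMENT = 1 << 30    # record throughput for each 1 GiB segment

def fill_and_measure(path, total_bytes, block=BLOCK, segment=SEGMENT):
    """Sequentially write total_bytes to path, returning a list of
    (segment_index, MB_per_second) covering each completed segment.
    Watching these per-segment numbers is how an SLC cache boundary
    shows up: throughput drops sharply once the cache is exhausted."""
    buf = b"\xA5" * block
    results = []
    written_in_seg = 0
    seg_index = 0
    seg_start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        remaining = total_bytes
        while remaining > 0:
            n = f.write(buf[:min(block, remaining)])
            remaining -= n
            written_in_seg += n
            if written_in_seg >= segment:
                elapsed = time.perf_counter() - seg_start
                results.append((seg_index, written_in_seg / elapsed / 1e6))
                seg_index += 1
                written_in_seg = 0
                seg_start = time.perf_counter()
    return results
```

Pointing this at a file on the drive under test (rather than the raw device) adds filesystem overhead, but the shape of the resulting curve still reveals where caching behavior changes.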
The Sabrent Rocket Q takes the strategy of providing the largest practical SLC cache size, which in this case is a whopping 2TB. The Samsung 870 QVO takes the opposite (and less common for QLC drives) approach of limiting the SLC cache to just 78GB, the same as on the 2TB and 4TB models.
[Table: Average Throughput for last 16 GB | Overall Average Throughput]
Both drives maintain fairly steady write performance after their caches run out, but the Sabrent Rocket Q's post-cache write speed is twice as high. The post-cache write speed of the Rocket Q is still a bit slower than a TLC SATA drive, and is just a fraction of what's typical for TLC NVMe SSDs.
On paper, Samsung's 92L QLC is capable of a program throughput of 18MB/s per die, and the 8TB 870 QVO has 64 of those dies, for an aggregate theoretical write throughput of over 1GB/s. SLC caching can account for some of the performance loss, but the lack of performance scaling beyond the 2TB model is a controller limitation rather than a NAND limitation. The Rocket Q is affected by a similar limitation, but also benefits from QLC NAND with a considerably higher program throughput of 30MB/s per die.
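The arithmetic behind those theoretical ceilings is straightforward. The 64-die count for the 870 QVO comes from the article; assuming the 8TB Rocket Q also packs 64 dies (not stated in the excerpt), the same calculation applies:

```python
# Theoretical aggregate program throughput = per-die rate * die count,
# before SLC caching and controller bottlenecks are accounted for.
DIES = 64  # 8TB of 1Tb-class QLC dies; assumed identical for both drives

samsung_mb_s = 18 * DIES   # Samsung 92L QLC: 18 MB/s program rate per die
rocket_q_mb_s = 30 * DIES  # Rocket Q's QLC: 30 MB/s program rate per die

print(samsung_mb_s)   # 1152 MB/s: the "over 1 GB/s" figure in the text
print(rocket_q_mb_s)  # 1920 MB/s on paper
```

Neither drive comes close to these numbers after the cache runs out, which is what points to the controller, rather than the NAND, as the limiting factor.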
Working Set Size
Most mainstream SSDs have enough DRAM to store the entire mapping table that translates logical block addresses into physical flash memory addresses. DRAMless drives only have small buffers to cache a portion of this mapping information. Some NVMe SSDs support the Host Memory Buffer feature and can borrow a piece of the host system's DRAM for this cache rather than needing lots of on-controller memory.
When accessing a logical block whose mapping is not cached, the drive needs to read the mapping from the full table stored on the flash memory before it can read the user data stored at that logical block. This adds extra latency to read operations and in the worst case may double random read latency.
We can see the effects of the size of any mapping buffer by performing random reads from different sized portions of the drive. When performing random reads from a small slice of the drive, we expect the mappings to all fit in the cache, and when performing random reads from the entire drive, we expect mostly cache misses.
When performing this test on mainstream drives with a full-sized DRAM cache, we expect performance to be generally constant regardless of the working set size, or for performance to drop only slightly as the working set size increases.
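The expected behavior can be captured in a toy model: a mapping-cache miss forces an extra flash read, which in the worst case doubles random read latency. The latency figures below are illustrative assumptions, not measurements from either drive:

```python
def mean_random_read_latency_us(working_set_gb, cached_coverage_gb,
                                hit_us=100.0, miss_us=200.0):
    """Toy model of FTL mapping-cache behavior under uniform random
    reads. A miss costs an extra flash read for the mapping lookup;
    worst case, miss_us = 2 * hit_us, doubling latency. Assumes the
    cache covers a fixed span of the logical address space. All
    latency values here are illustrative placeholders."""
    hit_rate = min(1.0, cached_coverage_gb / working_set_gb)
    return hit_rate * hit_us + (1.0 - hit_rate) * miss_us
```

With coverage of the whole drive (as on the 8TB 870 QVO, whose full-sized DRAM holds the entire mapping table), the curve stays flat; with a small cache, modeled latency climbs as the working set grows past the covered span, matching the drop seen on the Rocket Q.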
The Sabrent Rocket Q's random read performance is unusually unsteady at small working set sizes, but levels out at a bit over 8k IOPS for working set sizes of at least 16GB. Reads scattered across the entire drive do show a substantial drop in performance, due to the limited size of the DRAM buffer on this drive.
The Samsung drive has a full 8GB of DRAM and can keep the entire drive's address mapping table in RAM, so its random read performance does not vary with working set size. However, it's clearly slower than the smaller capacities of the 870 QVO; there's some extra overhead in connecting this much flash to a 4-channel controller.
heffeque - Friday, December 4, 2020
"Of course we hope that firmwares don't have such bugs, but how would we know unless someone looked at the numbers?"
Well, on a traditional HDD you also have to hope they put helium in it and not mustard gas by mistake. It "can" happen, but how would we know if nobody opens every single drive?
On a serious note, if a drive has such a serious firmware bug, rest assured that someone will notice, that it will go public quite fast, and that it will end up getting fixed (as it has in the past).
Spunjji - Monday, December 7, 2020
Thanks for responding to that "how do you know unless you look" post appropriately. That kind of woolly thinking really gets my goat.
joesiv - Monday, December 7, 2020
Well, I for one would rather not be the one that discovers the bug, and lose my data.
I didn't experience this one, but it's an example of a firmware bug:
Where I work, I'm involved in SSD evaluation. A drive we used in the field had a nasty firmware bug that took out dozens of our SSDs after a couple of years of operation (well within their specs). The manufacturer fixed it in a firmware update, but not until a year or more after release, so we had already shipped hundreds of products.
Knowing that, I evaluate them now. But for my personal use, where my needs are different, I'd love it if at least a very simple check were done in reviews. It's not that hard: review the SSD, then check whether the writes to NAND are reasonable given the workload you gave it. It's right there in the SMART data. It'll be in block units, so you might have to multiply by the block size, but it'll tell you a lot.
Just by doing something similar, we were able to vet a drive that was writing 100x more to NAND than it should have been; essentially it was using up its life expectancy at 1% per day! Working with the manufacturer, they eventually decided we should just move to another product; they weren't much into firmware fixes.
Anyways, someone should keep the manufacturers honest. Why not start with the reviews?
Also, no offence, but what is the "woolly thinking" you're talking about? I'm just trying to protect my investment and data.
heffeque - Tuesday, December 8, 2020
As if HDDs didn't have their share of problems, both firmware and hardware (especially the hardware). I've seen loads of HDDs die in the first 48 hours, then a huge percentage of them no later than a year afterwards.
My experience is that SSDs last A LOT longer and are A LOT more reliable than HDDs.
While the HDDs had been breaking every 1-3 years (and changing them was a high cost due to the remote locations and the high wages in Scandinavian countries), since we changed to SSDs we have had literally ZERO replacements to perform. So... the experience of hundreds of SSDs not failing vs hundreds of HDDs that barely last a few years doesn't go in favor of HDDs by any measure.
In the end, paying to send a slightly more expensive device (the SSD) to those countries has paid for itself several-fold in just a couple of years.
MDD1963 - Friday, December 4, 2020
I've only averaged 0.8 TB per *month* over 3.5 years....
joesiv - Monday, December 7, 2020
Out of curiosity, how did you come to this number?
Just be aware that SMART data will track different things. You're probably right, but SMART data is manufacturer- and model-dependent, and sometimes they'll use the attributes differently. You really have to look up the SMART documentation for your drive to be sure it calculates and uses the attributes the way your SMART utility labels them. Some manufacturers also don't track writes to NAND.
I would look at:
"writes to NAND" or "lifetime writes to flash" - which for some Kingston drives is attribute 233
"SSD Life Left" - which for some ADATA drives is attribute 232, and for Micron/Crucial might be 202. This is usually calculated from the average block erase count against the rated erase count of the NAND (around 3000 for MLC, much less for 3D NAND)
A lot of manufacturers haven't included actual NAND writes in their SMART data at all, so it'd be hard to get at, and they should be called out for it (Delkin, Crucial).
"Total host writes" is what the OS wrote, and is what most viewers assume manufacturers mean when they talk about drive writes per day or TB per day. That's the amount of data fed to the SSD, not what is actually written to NAND.
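The checks described in these comments boil down to a little arithmetic on the raw SMART counters. A sketch, with the caveat from the thread that attribute IDs and block units are vendor-specific, so these helper names and unit conversions are illustrative rather than a universal recipe:

```python
def nand_writes_tb(raw_counter, block_bytes):
    """Convert a raw NAND-writes SMART counter (reported in vendor
    block units, e.g. attribute 233 on some Kingston drives) into
    terabytes by multiplying by the block size, as suggested above."""
    return raw_counter * block_bytes / 1e12

def write_amplification(nand_writes, host_writes):
    """Ratio of NAND writes to host writes (same units for both).
    Near 1 is healthy; the pathological drive described in this
    thread showed roughly 100x."""
    return nand_writes / host_writes

def percent_life_used(avg_erase_count, rated_erase_count):
    """'SSD Life Left'-style estimate from block erase counts,
    e.g. ~3000 rated program/erase cycles for MLC NAND."""
    return 100.0 * avg_erase_count / rated_erase_count
```

Feeding these the raw values from a tool like smartctl (after checking the drive's own SMART documentation for the correct attribute and block size) is enough to spot a drive burning through its endurance far faster than the host workload justifies.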
Also realize that wear-leveling routines can eat up SSD life as well. I'm not sure how the SLC modes that newer firmwares have affect life expectancy/NAND writes, actually.
stanleyipkiss - Friday, December 4, 2020
Honestly, if the prices of these QLC high-capacity drives dropped a bit, I would be all over them, especially for NAS use. I just want to move away from spinning mechanical drives, but when I can get an 18 TB drive for the same price as a 4-8 TB SSD, I will choose the larger drive.
Just make them cheaper.
Also: I would love HIGHER capacity, and I WOULD pay for it... Micron had some drives, and I'm sure some mainstream drives could be made available. If you can squeeze 8TB onto M.2, then you could certainly put 16TB in a 2.5-inch drive.
DigitalFreak - Monday, December 7, 2020
Ask and ye shall receive.
Xex360 - Friday, December 4, 2020
The prices don't make any sense: you can get multiple drives with the same total capacity for less money, with more performance and reliability, and those drives should arguably cost more because they use more material.
inighthawki - Friday, December 4, 2020
At least for the Sabrent drive, M.2 slots can be at a premium, so it makes perfect sense for a single drive to cost more than two smaller ones. On many systems, hooking up that many drives would require a PCIe expansion card, and if you're not just bifurcating an existing x8 or x16 slot, you would need a PCIe switch, which is going to cost hundreds of dollars at minimum.