Kingston may not be a name that rolls off the tongue when you're talking about datacenter hardware vendors, but the company has come to have a major presence in datacenters through its DRAM modules. A lucrative and high-volume market in its own right, datacenter DRAM has unsurprisingly been Kingston's springboard for attempts to pivot into other datacenter products, but they've met only limited success thus far. Their other product lines – in particular enterprise/datacenter SSDs – have been serviceable, but haven't been able to crack the market as a whole.

Still intent on slicing out a larger portion of the datacenter SSD market, Kingston has decided to raise their profile by introducing SSDs that are based around the needs of their existing DRAM customers. That means that the company's new DC500 family of SSDs is intended for second-tier cloud service providers and system integrators, rather than the top hyperscalers like Google, Microsoft, Amazon, etc. This also means that the new drives are SATA SSDs, because in this market segment – which relies more heavily on commodity components and platforms than Open Compute Project-style thorough customization – there is still significant demand for SATA SSDs.

NVMe SSDs add to platform costs in the form of expensive PCIe switches and backplanes, the drives themselves cost more than SATA drives of the same capacity, and power efficiency is often better for SATA than NVMe. PCIe SSDs make it possible to cram a lot of storage performance into a smaller number of drives and servers, but where the emphasis is on capacity and cost effectiveness, SATA still has a place.

The SATA interface itself is stuck at 6Gbps, but the technology that goes into SATA SSDs continues to evolve with new generations of NAND flash memory and new SSD controllers. Kingston's new DC500 family of enterprise SATA SSDs is our first look at Phison's new S12 SSD controller (specifically, the S12DC variant), the replacement for the S10 that has been on the market for over five years. (The S11 is Phison's current DRAMless SATA controller.) While consumer SATA SSD controllers have mostly dropped down to just four NAND channels, the S12DC retains eight channels, more for the sake of supporting high capacities than for improving performance. The S12DC officially supports up to 8TB, though Kingston isn't pushing things that far yet. The controller is fabbed on a 28nm process and brings major improvements to error correction, including Phison's third-generation LDPC engine.

The DC500 family uses Intel's 64-layer TLC NAND flash memory, a break from Kingston's usual preference for Toshiba NAND. 96/92-layer TLC has started to show up in the client/consumer SSD market, but it's still a bit early to be seeing it in this part of the enterprise storage market.

The DC500 family includes two tiers: the DC500R for read-heavy workloads (endurance rating of 0.5 DWPD) and the DC500M for more mixed read/write workloads (endurance rating of 1.3 DWPD). Kingston says the Intel NAND they are using is rated for about 5000 program/erase cycles, so with a warranty for a bit less than 1000 total drive writes on the DC500R they're clearly allowing for quite a bit of write amplification.

NVMe SSDs have mostly killed off the market for very high endurance SATA drives, because applications that need to support several drive writes per day tend to need higher performance than SATA can support (and as drive capacities increase, there's no longer enough time in a day to complete more than a few drive writes at ~0.5GB/s). Micron still offers a 5 DWPD SATA model (5200 MAX) but most other brands now top out around 3 DWPD for SATA drives. Those 3 DWPD and higher drives only account for about 20% of the market, so Kingston isn't missing out on too many sales by only going up to 1.3 DWPD with the DC500 family. The introduction of QLC NAND has helped lower the entry-level of this market down to around 0.1 DWPD, but Kingston doesn't have anything to offer at that level yet.
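The endurance and throughput arithmetic here is easy to sanity-check. A quick back-of-the-envelope script (our own illustration, not anything from Kingston) reproduces the TBW ratings in the spec table and the drive-writes-per-day ceiling imposed by SATA throughput:

```python
# Back-of-the-envelope endurance math. Illustrative only; the formula is the
# standard TBW = capacity x DWPD x warranty period.

def tbw(capacity_gb: float, dwpd: float, years: float = 5) -> float:
    """Total terabytes written over the warranty period."""
    return capacity_gb * dwpd * 365 * years / 1000

print(tbw(3840, 0.5))   # 3504.0 -- matches the DC500R 3.84TB rating of 3504 TB
print(tbw(3840, 1.3))   # ~9110.4 -- matches the DC500M rating of 9110 TB

# How many full drive writes per day can SATA even deliver?
# At ~0.5 GB/s, one full pass over a 3840 GB drive takes 7680 s (~2.1 hours):
writes_per_day = 24 * 3600 / (3840 / 0.5)
print(writes_per_day)   # 11.25 drive writes per day, as an absolute best case
```

The 0.5 DWPD rating works out to roughly 912 drive writes over five years, consistent with the "a bit less than 1000 total drive writes" figure above.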

Kingston DC500 Series Specifications

| Capacity | 480 GB | 960 GB | 1920 GB | 3840 GB |
| Form Factor | 2.5" 7mm SATA (all capacities) |
| Controller | Phison PS3112-S12DC (all capacities) |
| NAND Flash | Intel 64-layer 3D TLC (all capacities) |
| DRAM | Micron DDR4-2666 (all capacities) |
| Sequential Read | 555 MB/s (all capacities) |
| Sequential Write, DC500R | 500 MB/s | 525 MB/s | 525 MB/s | 520 MB/s |
| Sequential Write, DC500M | 520 MB/s | 520 MB/s | 520 MB/s | 520 MB/s |
| Random Read | 98k IOPS (all capacities) |
| Random Write, DC500R | 12k IOPS | 20k IOPS | 24k IOPS | 28k IOPS |
| Random Write, DC500M | 58k IOPS | 70k IOPS | 75k IOPS | 75k IOPS |
| Power | Read 1.8 W / Write 4.86 W / Idle 1.56 W (all capacities) |
| Warranty | 5 years (all capacities) |
| Endurance, DC500R (0.5 DWPD) | 438 TB | 876 TB | 1752 TB | 3504 TB |
| Endurance, DC500M (1.3 DWPD) | 1139 TB | 2278 TB | 4555 TB | 9110 TB |
| Retail Price (CDW), DC500R | $104.99 (22¢/GB) | $192.99 (20¢/GB) | $364.99 (19¢/GB) | $733.99 (19¢/GB) |
| Retail Price (CDW), DC500M | $125.99 (26¢/GB) | $262.99 (27¢/GB) | $406.99 (21¢/GB) | $822.99 (21¢/GB) |

The DC500R and DC500M are available in the same set of usable capacities ranging from 480GB to 3840GB, but they differ in the amount of spare area included, which is what allows the -M to offer higher write endurance and higher sustained write performance. For sequential IO, the -R and -M versions are rated to deliver essentially the same performance, bottlenecked by the SATA link. The same is true for random reads, but steady-state random write performance is limited by the flash itself and varies with drive capacity and spare area; even the smallest DC500M is rated for higher random write performance than the largest DC500R.

Power consumption is rated at a modest 1.8 W for reads and a fairly typical 4.86 W for writes. Low-power idle states are usually not included on enterprise drives, so the DC500s are rated to idle at 1.56 W.

Left: DC500R 3.84 TB, Right: DC500M 3.84 TB

The DC500R and DC500M both use the same plain metal case, but the PCBs inside have some minor layout changes due to the differences in overprovisioning. Our 3.84TB samples feature raw capacities of 4096GB for the DC500R and 5120GB for the DC500M, so the -R versions have comparable overprovisioning to consumer SSDs while the -M versions have about three times as much spare area. The extra flash on the DC500M also requires it to have more DRAM: 6GB instead of the 4GB found on the DC500R 3.84TB.
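The "about three times as much spare area" figure can be checked in a couple of lines, under the conventional assumption (ours, not a Kingston statement) that raw NAND capacity is counted in binary gibibytes while usable capacity is decimal gigabytes:

```python
# Overprovisioning math for the 3.84 TB models. Assumes the quoted raw
# capacities (4096GB and 5120GB) are binary GiB, as NAND die capacities are,
# while the usable 3840GB is decimal -- the usual industry convention.

GIB = 2**30
GB = 10**9

def spare_gb(raw_gib: int, usable_gb: int) -> float:
    """Spare area in decimal GB: raw capacity minus usable capacity."""
    return (raw_gib * GIB - usable_gb * GB) / GB

r_spare = spare_gb(4096, 3840)   # DC500R: ~558 GB spare, ~14.5% of usable
m_spare = spare_gb(5120, 3840)   # DC500M: ~1658 GB spare, ~43% of usable
print(round(r_spare), round(m_spare), round(m_spare / r_spare, 1))
```

The ratio comes out very close to 3x, matching the description above.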

Physically, the memory is laid out differently between the two drives. The 3.84TB DC500R has a total of 16 packages with 256GB of NAND each, while the 3.84TB DC500M uses 10 packages of 512GB each rather than mixing packages of different capacities. In both cases this is Intel NAND packaged by Kingston. Since the -M has fewer NAND packages, it also gets away with fewer of the small TI multiplexer chips that sit next to the controller. The -M also has two fewer tantalum caps for power loss protection, despite having more total NAND and DRAM.

The Competition

There are plenty of competing enterprise SATA SSDs based on 64-layer 3D TLC, but many of them have been on the market for quite a while; Kingston's a bit late to market for this generation. Samsung's SATA SSDs launched last fall are the only current-generation drives we have to compare against the Kingston DC500s, and all of our older enterprise SATA SSDs are far too outdated to be relevant.

The Samsung 883 DCT falls somewhere in between the DC500R and DC500M, with a write endurance of 0.8 DWPD (compared to 0.5 and 1.3 for the Kingston drives). The Samsung 860 DCT is a bit of an oddball since it lacks one of the defining features of enterprise SSDs: power loss protection capacitors. It also has quite a low endurance rating of just 0.2 DWPD, which is almost in QLC territory. Despite these handicaps, it still uses Samsung's excellent controller and firmware, and is tuned to offer much better performance and QoS on server workloads than can be expected from the client and consumer SSDs it superficially resembles.

To give a sense of scale, we've also included results for Samsung's entry-level datacenter NVMe drive, the 983 DCT, specifically the 960GB M.2 model. Some relevant SATA competitors that we have not tested include the Intel D3-S4510 and Micron 5200 ECO, both using the same 64L TLC as the Kingston drives but with different controllers.

Test System

Intel provided our enterprise SSD test system, one of their 2U servers based on the Xeon Scalable platform (codenamed Purley). The system includes two Xeon Gold 6154 18-core Skylake-SP processors, and 16GB DDR4-2666 DIMMs on all twelve memory channels for a total of 192GB of DRAM. Each of the two processors provides 48 PCI Express lanes plus a four-lane DMI link. The allocation of these lanes is complicated. Most of the PCIe lanes from CPU1 are dedicated to specific purposes: the x4 DMI plus another x16 link go to the C624 chipset, and there's an x8 link to a connector for an optional SAS controller. This leaves CPU2 providing the PCIe lanes for most of the expansion slots, including most of the U.2 ports.

Enterprise SSD Test System

| System Model | Intel Server R2208WFTZS |
| CPU | 2x Intel Xeon Gold 6154 (18C, 3.0GHz) |
| Motherboard | Intel S2600WFT |
| Chipset | Intel C624 |
| Memory | 192GB total, Micron DDR4-2666 16GB modules |
| Software | Linux kernel 4.19.8, fio version 3.12 |

Thanks to StarTech for providing a RK2236BKF 22U rack cabinet.

The enterprise SSD test system and most of our consumer SSD test equipment are housed in a StarTech RK2236BKF 22U fully-enclosed rack cabinet. During testing for this review, the front door on this rack was generally left open to allow better airflow, and some Silverstone FQ141 case fans have been installed to help exhaust hot air from the top of the cabinet.

The test system is running a Linux kernel from the most recent long-term support branch. This brings in about a year's work on Meltdown/Spectre mitigations, though strategies for dealing with Spectre-style attacks are still evolving. The benchmarks in this review are all synthetic benchmarks, with most of the IO workloads generated using FIO. Server workloads are too widely varied for it to be practical to implement a comprehensive suite of application-level benchmarks, so we instead try to analyze performance on a broad variety of IO patterns.

Enterprise SSDs are specified for steady-state performance and don't include features like SLC caching, so the duration of benchmark runs doesn't have much effect on the score, so long as the drive was thoroughly preconditioned. Except where otherwise specified, for our tests that include random writes the drives were prepared with at least two full drive writes of 4kB random writes. For all the other tests, the drives were prepared with at least two full sequential write passes.
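As a concrete illustration, a random-write preconditioning pass like the one described above can be expressed as a fio job file along these lines. This is a sketch based on the description, not our actual job file; the device name is a placeholder, and running it would destroy all data on the target drive:

```ini
; precondition.fio -- illustrative sketch only
; WARNING: this overwrites the entire target device.
[global]
; /dev/sdX is a placeholder; point this at the drive under test
filename=/dev/sdX
direct=1
ioengine=libaio
iodepth=32

[random-precondition]
rw=randwrite
bs=4k
; two full passes over the device
loops=2
```

For the sequential-write preparation used by the other tests, the same job with rw=write and a larger block size would apply.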

Our drive power measurements are conducted with a Quarch HD Programmable Power Module. This device supplies power to drives and logs both current and voltage simultaneously. With a 250kHz sample rate and precision down to a few mV and mA, it provides a very high resolution view into drive power consumption. For most of our automated benchmarks, we are only interested in averages over time spans on the order of at least a minute, so we configure the power module to average together its measurements and only provide about eight samples per second, but internally it is still measuring at 4µs intervals so it doesn't miss out on short-term power spikes.
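Conceptually, the module's averaging amounts to block-mean decimation of the raw sample stream. The sketch below (our own illustration, not Quarch's implementation) shows why a millisecond-scale power spike still registers in the ~8 Hz averaged output instead of being missed:

```python
# Block-average decimation: reduce a 250 kHz sample stream to ~8 readings/s
# while preserving the energy of short-lived power spikes.
from statistics import mean

RAW_RATE = 250_000             # raw samples per second (4 us intervals)
OUT_RATE = 8                   # averaged readings reported per second
BLOCK = RAW_RATE // OUT_RATE   # 31250 raw samples per reported reading

def decimate(samples):
    """Average consecutive BLOCK-sized chunks of raw samples."""
    return [mean(samples[i:i + BLOCK])
            for i in range(0, len(samples) - BLOCK + 1, BLOCK)]

# One second of 1.5 W idle with a 2 ms, 5 W spike at the start
# (500 samples at 250 kHz = 2 ms):
one_second = [1.5] * RAW_RATE
for i in range(500):
    one_second[i] = 5.0
readings = decimate(one_second)
print(len(readings), round(readings[0], 3))   # 8 readings; first is ~1.556 W
```

The spike is spread across the first averaged reading rather than lost, which is what matters when computing average power over a minute-long test run.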

Performance at Queue Depth 1


Comments

  • KAlmquist - Tuesday, June 25, 2019 - link

    Good points. I'd add that there is an upgrade to SATA called "SATA Express" which basically combines two PCIe lanes and traditional SATA into a single cable. It never really took off for the reasons you explained: it's simpler just to switch to PCIe.
  • MDD1963 - Tuesday, June 25, 2019 - link

    It would be nice indeed to see a new SATA4 spec at SAS speeds, 12 Gbps....
  • TheUnhandledException - Saturday, June 29, 2019 - link

    Why? Why not just use PCIe directly. Flash drives don't need the SATA interface and ultimately the SATA interface becomes PCIe at the SATA controller. It is just adding additional pointless translation to fit a round peg into a square hole. Connect your flash drive to PCIe and it is as slow or fast as you want it to be. 2x PCIe 3.0 you got ~2GB/s to work with, 4 lanes gets you 4GB/s. Upgrade to PCIe 4 and you now have 8 GB/s.
  • jabber - Wednesday, June 26, 2019 - link

    They could stay with 6GBps just fine. I'd say work on reducing the latency.

    Bandwidth is done. Latency is more important now IMO. Ultra low latency SATA would do fine for years to come.
  • RogerAndOut - Friday, July 12, 2019 - link

    In an enterprise environment, the 6Gbps speed is not much of an issue, as deployment does not involve individual drives. Once you have 8, 16, 32, etc. in some form of RAID configuration, the overall bandwidth increases. Such systems may also have NVMe-based modules acting as a cache to allow fast retrieval of frequently accessed blocks and to speed up the 'commit' time of writes.
  • Dug - Tuesday, June 25, 2019 - link

    I would like to see the Intel and Micron Pro included.
    We need drives with power loss protection.
    And I don't think write-heavy is relegated to NVMe territory. That's just not in the cards for small businesses or even large businesses: 1 because of cost, 2 because of size, 3 because of scalability.
  • MDD1963 - Tuesday, June 25, 2019 - link

    1.3 DWPD endurance (9100+ TB of writes!) for a 3.8 TB drive? Impressive! $800+... lower it to $399, and count me in! :)
  • m4063 - Tuesday, September 8, 2020 - link

    LISTEN! The most important feature, and reason to buy these drives, is that they have a power-loss-protected (PLP) cache, not just for protecting your data, BUT FOR SPEED!
    I believe the most important thing about PLP is that it should improve direct synchronous I/O (ESX and SQL), because the drive can report back that the data is "written to disk" as soon as the data hits the cache, whereas a non-PLP drive actually needs to write the data to the NAND before reporting "OK"!
    And for that reason, the size of the PLP-protected cache is obviously pretty important.
    Neither of those factors is considered or tested in this review, which deserves criticism.
    This is the main reason you should go for these drives. I asked Kingston about the PLP-protected cache size and got:
    SEDC500M/480 - 1GB
    SEDC500M/960 - 2GB
    SEDC500M/1920 - 4GB
    These sizes could make a huge difference in synchronous-I/O-intensive systems/applications.
    AnandTech: please cover these factors in your tests/reviews!
    (Admittedly, I haven't done any benchmarks myself, for lack of PLP drives.)
