6 TB NAS Drives: WD Red, Seagate Enterprise Capacity and HGST Ultrastar He6 Face-Off
by Ganesh T S on July 21, 2014 11:00 AM EST

Introduction and Testbed Setup
The SMB / SOHO / consumer NAS market has been experiencing rapid growth over the last few years. With declining PC sales and the increasing affordability of SSDs, hard drive vendors have scrambled to make up for the deficit and grow revenue by targeting the NAS market. The good news is that this growth is expected to accelerate in the near future, thanks to the increasing amount of user-generated data from mobile devices.
Back in July 2012, Western Digital began the trend of hard drive manufacturers bringing out dedicated units for the burgeoning SOHO / consumer NAS market with the 3.5" Red hard drive lineup. The firmware was tuned for 24x7 operation in SOHO and consumer NAS units. 1 TB, 2 TB and 3 TB versions were made available at launch. Later, Seagate also jumped into the fray with a hard drive series carrying similar firmware features. Over the last two years, the vendors have been optimizing the firmware features as well as increasing the capacities. On the enterprise side, hard drive vendors have been supplying different models for different applications, but all of them are quite suitable for 24x7 NAS usage. For example, the WD Re and Seagate Constellation ES are tuned for durability under heavy workloads, while the WD Se and Seagate Terascale units are targeted towards applications where scalability and capacity are important.
Usually, the enterprise segment is quite conservative when it comes to capacity, but datacenter / cloud computing requirements have made capacity a primary factor in warding off all-flash solutions. HGST, a Western Digital subsidiary, was the first vendor to bring a 6 TB hard drive to the market. Its sealed, helium-filled HDDs can accommodate up to seven platters (instead of the five usually possible in air-filled units), enabling 6 TB in the same height as traditional 3.5" drives. Seagate adopted a six-platter design for the 6 TB version of the Enterprise Capacity v4. Today, Western Digital launched the first NAS-specific 6 TB drive targeting SOHO / home consumers, the WD Red 6 TB. The expansion of the Red portfolio gives us an opportunity to see how the 6 TB version stacks up against other offerings targeting the NAS market.
The correct choice of hard drives for a NAS system is influenced by a number of factors. These include expected workloads, performance requirements and power consumption restrictions, amongst others. In this review, we will discuss some of these aspects while evaluating three different hard drives targeting the NAS market:
- Western Digital Red 6 TB [ WDC WD60EFRX-68MYMN0 ]
- Seagate Enterprise Capacity 3.5 HDD v4 6 TB [ ST6000NM0024-1HT17Z ]
- HGST Ultrastar He6 6 TB [ HUS726060ALA640 ]
Each of these drives targets a slightly different market: the WD Red is aimed mainly at SOHO and home consumers, the Seagate Enterprise Capacity emphasizes durability under heavy workloads, and the HGST Ultrastar is meant for data center and cloud storage applications that demand a balance of performance and power efficiency.
Testbed Setup and Testing Methodology
Unlike our previous evaluation of 4 TB drives, we managed to obtain enough samples of the new drives to test them in a proper NAS environment. As usual, we start off with a feature-set comparison of the three drives, followed by a look at raw performance when each drive is connected directly to a SATA 6 Gbps port. In the same PC, we also evaluate the drives using some aspects of our direct-attached storage (DAS) testing methodology. For evaluation in a NAS environment, we configured three drives in a RAID-5 volume and ran selected benchmarks from our standard NAS review methodology.
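For readers who want a rough sense of what the raw-drive portion of this methodology measures, the sketch below times a large sequential read straight from the block device, bypassing the page cache. It is only an illustrative stand-in for the actual benchmark tools used in the review; the Linux device path and read size are assumptions, and the script must be run as root.

```python
#!/usr/bin/env python3
"""Rough sequential-read throughput check on a raw block device.

Illustrative stand-in only, not the review's benchmark suite.
Linux-only (O_DIRECT, os.readv); run as root. /dev/sdb is a placeholder.
"""
import mmap
import os
import time

DEVICE = "/dev/sdb"          # placeholder device path; double-check before use
BLOCK_SIZE = 1024 * 1024     # 1 MiB per read request
TOTAL_BYTES = 4 * 1024**3    # read the first 4 GiB of the disk

def sequential_read_mbps(device: str) -> float:
    # O_DIRECT bypasses the page cache but requires an aligned buffer;
    # an anonymous mmap is page-aligned, which satisfies that requirement.
    buf = mmap.mmap(-1, BLOCK_SIZE)
    fd = os.open(device, os.O_RDONLY | os.O_DIRECT)
    read_so_far = 0
    start = time.perf_counter()
    try:
        while read_so_far < TOTAL_BYTES:
            n = os.readv(fd, [buf])     # fill the aligned buffer from the disk
            if n == 0:                  # end of device
                break
            read_so_far += n
    finally:
        os.close(fd)
    elapsed = time.perf_counter() - start
    return read_so_far / elapsed / 1e6  # MB/s (decimal, as drive vendors quote)

if __name__ == "__main__":
    print(f"{DEVICE}: {sequential_read_mbps(DEVICE):.0f} MB/s sequential read")
```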
We used two testbeds in our evaluation, one for benchmarking the raw drive and DAS performance and the other for evaluating performance when placed in a NAS unit.
| AnandTech DAS Testbed Configuration | |
| --- | --- |
| Motherboard | Asus Z97-PRO Wi-Fi ac ATX |
| CPU | Intel Core i7-4790 |
| Memory | Corsair Vengeance Pro CMY32GX3M4A2133C11 32 GB (4x 8GB) DDR3-2133 @ 11-11-11-27 |
| OS Drive | Seagate 600 Pro 400 GB |
| Optical Drive | Asus BW-16D1HT 16x Blu-ray Write (w/ M-Disc Support) |
| Add-on Card | Asus Thunderbolt EX II |
| Chassis | Corsair Air 540 |
| PSU | Corsair AX760i 760 W |
| OS | Windows 8.1 Pro |
| Thanks to Asus and Corsair for the build components | |
In the above testbed, the hot-swap bays of the Corsair Air 540 deserve special mention: they made cycling drives in and out for benchmarking fast and efficient. For NAS evaluation, we used the QNAP TS-EC1279U-SAS-RP. It is very similar to the unit we reviewed last year, except that it has a slightly faster CPU, more RAM, and support for both SATA and SAS drives.
The NAS setup itself was subjected to benchmarking using our standard NAS testbed.
| AnandTech NAS Testbed Configuration | |
| --- | --- |
| Motherboard | Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB |
| CPU | 2 x Intel Xeon E5-2630L |
| Coolers | 2 x Dynatron R17 |
| Memory | G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30 |
| OS Drive | OCZ Technology Vertex 4 128GB |
| Secondary Drive | OCZ Technology Vertex 4 128GB |
| Tertiary Drive | OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD) |
| Other Drives | 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS) |
| Network Cards | 6 x Intel ESA I-340 Quad-GbE Port Network Adapter |
| Chassis | SilverStoneTek Raven RV03 |
| PSU | SilverStoneTek Strider Plus Gold Evolution 850W |
| OS | Windows Server 2008 R2 |
| Network Switch | Netgear ProSafe GSM7352S-200 |
Thank You!
We thank the following companies for helping us out with our NAS testbed:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs, twelve 64GB Vertex 4 SSDs and the OCZ Z-Drive R4 CM88
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
- Thanks to Netgear for the ProSafe GSM7352S-200 L3 48-port Gigabit Switch with 10 GbE capabilities.
Comments
brettinator - Friday, March 18, 2016
I realize this is years old, but I did indeed use raw I/O on a 10TB fried RAID 6 volume to recover copious amounts of source code.

andychow - Monday, November 24, 2014
@extide, you've just shown that you don't understand how it works. You're NEVER going to have checksum errors if your data is being corrupted by your RAM. That's why you need ECC RAM, so errors don't happen "up there". You might have tons of corrupted files, you just don't know it. 4 GB of RAM has a 96% chance of having a bit error in three days without ECC RAM.
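To make the point concrete, here is a minimal conceptual sketch (not ZFS code, with SHA-256 standing in for the filesystem's block checksum): if a buffer is flipped in RAM before the write, the checksum is computed over the already-corrupted data, so every later read verifies cleanly and the corruption stays invisible.

```python
import hashlib

def write_block(data: bytes):
    # The filesystem checksums whatever is sitting in RAM at write time.
    # If a memory error already flipped a bit, the checksum faithfully
    # "protects" the corrupted contents.
    return data, hashlib.sha256(data).digest()

def read_block(stored: bytes, checksum: bytes) -> bytes:
    # Scrubs and reads only catch on-disk corruption: the stored data
    # still matches its checksum, so nothing is flagged.
    if hashlib.sha256(stored).digest() != checksum:
        raise IOError("checksum mismatch: on-disk corruption detected")
    return stored

buf = bytearray(b"important source file contents")
buf[3] ^= 0x01                              # bit flip in RAM before the write
stored, checksum = write_block(bytes(buf))
print(read_block(stored, checksum))         # verifies fine; data is silently wrong
```

In other words, block checksums only guarantee that what comes back from the disk is what the filesystem was handed; they say nothing about whether what it was handed was correct.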
alpha754293 - Monday, July 21, 2014
Yeah... while the official docs say you "need" ECC, the truth is you really don't. It's nice, and it'll help mitigate bit-flip errors and the like, but by that point you're already passing PBs of data through the array/zpool before it's even noticeable. Part of that has to do with the fact that ZFS does block-by-block checksumming, which, given the nature of how people run their systems, will probably reduce your errors even further; you might be talking a third of what's already an INCREDIBLY small percentage.

A system will NEVER complain about ECC RAM with ECC disabled in the BIOS (my servers have ECC RAM, but I've always disabled ECC); it isn't going to refuse to start up just because ECC is turned off.
And so far, I haven't seen ANY discernible evidence that suggests that ECC is an absolute must when running ZFS, and you can SAY that I am wrong, but you will also need to back that statement up with evidence/data.
AlmaFather - Monday, July 28, 2014
Some information: http://forums.freenas.org/index.php?threads/ecc-vs...
Samus - Monday, July 21, 2014
The problem with power-saving "green" style drives is that the APM is too aggressive. Even Seagate, which doesn't actively manufacture a "green" drive at the hardware level, uses firmware that sets aggressive APM values in many low-end and external versions of its drives, including the Barracuda XT. This is a completely unacceptable practice because the drives are effectively self-destructing. Most consumer drives are rated at 250,000 load/unload cycles, and I've racked up 90,000 cycles in a matter of MONTHS on drives with heavy I/O (seeding torrents, SQL databases, Exchange servers, etc.).
hdparm is a tool that lets you send ATA commands to a drive and disable APM (by setting the value to 255), overriding the firmware value. At least until the next power cycle...
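For anyone wanting to check whether their own drives are racking up load/unload cycles this quickly, the count is exposed as SMART attribute 193 (Load_Cycle_Count). The sketch below is a rough example, assuming Linux with smartmontools and hdparm installed and run as root: it reads that attribute and then disables APM as described above. The /dev/sdb path and the warning threshold are placeholders.

```python
#!/usr/bin/env python3
"""Check a drive's load/unload cycle count and optionally disable APM.

Assumes Linux with smartmontools (smartctl) and hdparm installed.
Run as root; /dev/sdb is only an example device path.
"""
import subprocess
import sys

DEVICE = "/dev/sdb"

def load_cycle_count(device: str) -> int | None:
    # 'smartctl -A' prints the SMART attribute table; attribute 193 is
    # Load_Cycle_Count on most consumer drives, with the raw value last.
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == "193":
            return int(fields[-1])
    return None

def disable_apm(device: str) -> None:
    # APM level 255 disables Advanced Power Management entirely (on many
    # drives only until the next power cycle), stopping aggressive head parking.
    subprocess.run(["hdparm", "-B", "255", device], check=True)

if __name__ == "__main__":
    cycles = load_cycle_count(DEVICE)
    if cycles is None:
        sys.exit("Attribute 193 not reported by this drive")
    print(f"{DEVICE}: Load_Cycle_Count = {cycles}")
    if cycles > 100_000:   # arbitrary warning threshold, for illustration only
        print("High cycle count; disabling APM:")
        disable_apm(DEVICE)
```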
name99 - Tuesday, July 22, 2014
I don't know if this is the ONLY problem. My most recent (USB3 Seagate 5GB) drive consistently exhibited a strange failure mode where it frequently seemed to disconnect from my Mac. Acting on a hunch, I disabled the OS X Energy Saver "Put hard disks to sleep when possible" setting, and the problem went away. (And energy usage hasn't gone up, because the Seagate drive puts itself to sleep anyway.)
Now you're welcome to read this as "Apple sux, obviously they screwed up" if you like. I'd disagree with that interpretation, given that I've connected dozens of different disks from different vendors to different Macs and have never seen this before. What I think is happening is that Seagate is not handling a race condition well: something like "Seagate starts to power down, halfway through it gets a command from OS X to power down, and it mishandles this command and puts itself into some sort of comatose mode that requires power cycling".
I appreciate that disk firmware is hard to write, and that power management is tough. Even so, it's hard not to get angry at what seems like pretty obvious incompetence in the code, coupled with an obviously not very demanding test regime.
jay401 - Tuesday, July 22, 2014
> Completely unsurprised here, I've had nothing but bad luck with any of those "intelligent power saving" drives that like to park their heads if you aren't constantly hammering them with I/O.

I fixed that the day I bought mine with the wdidle utility. No more excessive head parking, no excessive wear. I've had three 2TB Greens and two 3TB Greens with no issues so far (thankfully). Currently running a pair of 4TB Reds, but I haven't seen any excessive head parking showing up in the SMART data with those.
chekk - Monday, July 21, 2014
Yes, I just test all new drives thoroughly for a month or so before trusting them. My anecdotal evidence across about 50 drives is that they are either DOA, fail in the first month, or last for years. But hey, YMMV.

icrf - Monday, July 21, 2014
My anecdotal experience is about the same, but I'd extend the early-death window a few more months. I don't know that I've gone through 50 drives, but I've definitely seen a couple dozen, and that's the pattern. A one-year warranty is a bit short for comfort, but I don't know that I care much about 5 years over 3.

Guspaz - Tuesday, July 22, 2014
I've had a bunch of 2TB Greens (15 of them) in a ZFS server for years and none of them have failed. I expected them to fail, and I designed the setup to tolerate two to four of them failing without data loss, but... nothing.