Hitachi Deskstar 7K1000: Terabyte Storage Arrives on the Desktop
by Gary Key on March 19, 2007 8:00 AM EST - Posted in Storage
Hitachi Global Storage Technologies announced right before CES 2007 that they would ship a new 1TB (1000.2GB) hard disk drive in Q1 of this year at an extremely competitive price of $399, or about 40 cents per GB of storage. We fully expected Seagate to beat Hitachi in the race to a single terabyte drive after their 750GB drive shipped last June. However, it now appears the Seagate Barracuda 7200.11 1TB drive will not ship until Q2 of this year, leaving Hitachi alone at the top of the storage capacity mountain for now.
The Seagate delay appears to stem from their decision to move to 250GB per-platter capacities, while Hitachi is launching their Deskstar 7K1000 with a 200GB per-platter capacity. From all indications, Seagate's drive will continue to use a 16MB cache, while Hitachi has wisely chosen to implement a 32MB cache for their five-platter design. Hitachi has informed us they have the capability to go to 250GB per-platter designs but launched at smaller capacities to ensure their reliability targets were met. Considering the absolute importance of data integrity, we think this was a wise move.
What we did not expect was that this drive would be offered exclusively through Dell or its subsidiary Alienware in select XPS, Aurora, or Area 51 gaming desktop systems before general retail availability in the next two to three weeks. Alongside the immediate availability of the Deskstar 7K1000 from Dell and Alienware comes a new customer program called StudioDell. StudioDell is a community website that allows general consumers and business customers to see tips and trends in technology, as well as submit their own videos showcasing how they are using technology in innovative ways. All video content submitted to StudioDell for the remainder of 2007 will be copied onto a 1TB Hitachi Deskstar 7K1000 hard drive and stored for 50 years on the Dell campus in Round Rock, TX. You can visit the StudioDell website for more details.
For over fifty years, the storage industry has known that the longitudinal recording technology currently utilized would eventually become a limiting factor in drive capacities. Over the last decade, drive manufacturers have been doubling and at times quadrupling storage capacity at a dizzying rate to meet continuing demand from users. In fact, it took the industry almost 35 years to reach the 1GB level, another 14 years to reach 500GB, and now, with perpendicular recording technology, less than two years to reach 1TB.
The standard method of increasing a drive's capacity is to either add more platters or increase the density of the data on each platter. Increasing the density of data that can be stored on a platter is the preferred design, as it allows an overall increase in drive storage along with performance and cost advantages from reducing the number of components. However, this solution requires significantly more research and development effort, which can lead to additional complexity and cost. While the storage manufacturers have been able to develop and implement some incredible technologies to achieve the capacities, cost, and drive performance we currently enjoy, there is a limit to what can be achieved with longitudinal recording technology.
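The arithmetic behind the two approaches is simple: total capacity is platter count times per-platter capacity. A quick sketch using the figures above (Hitachi's five 200GB platters are stated; Seagate's platter count is our assumption, inferred from a 1TB total at 250GB per platter):

```python
def drive_capacity_gb(platters, gb_per_platter):
    """Total drive capacity: platter count times per-platter capacity."""
    return platters * gb_per_platter

hitachi_7k1000 = drive_capacity_gb(5, 200)  # five 200GB platters (stated above)
seagate_1tb = drive_capacity_gb(4, 250)     # assumed four 250GB platters

print(f"{hitachi_7k1000}GB")  # prints "1000GB"
print(f"{seagate_1tb}GB")     # prints "1000GB"
```

The same 1TB total, reached two different ways: Seagate's denser platters mean fewer components, while Hitachi's extra platter let them launch sooner at a lower density.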
The next evolutionary step in the storage industry for solving these issues is perpendicular recording technology. This technology has been researched and discussed for a number of years by the various drive manufacturers, and it has its roots in the late 19th century work of Danish scientist Valdemar Poulsen, who is generally considered the first person to magnetically record sound using perpendicular recording.
What is Perpendicular Recording Technology? Simply put, in perpendicular recording the magnetization of the disc stands on end, perpendicular to the plane of the disc, instead of lying in the disc's plane as it does in current longitudinal recording. Data bits are then represented as regions of upward or downward directed magnetization, whereas in longitudinal recording the data bit magnetization lies in the plane of the disc and switches between pointing in the same and opposite directions of the head movement. The recording layer sits on top of a soft magnetic under-layer that functions as part of the write field return path; this under-layer effectively generates a mirror image of the recording head, doubling the available recording field and enabling a higher recording density than longitudinal recording.
In order to increase areal densities and provide greater storage capacity in longitudinal recording, the data bits must be arranged and shrunk into a very tight pattern on the disc media. However, if a data bit becomes too small, the magnetic energy holding it in place can become so small that thermal energy alone can demagnetize it, a condition known as superparamagnetism.
To avoid superparamagnetism, engineers have been increasing the coercivity of the disc media, the field strength required to write a bit. The achievable write fields are limited by the magnetic materials making up the write head, which will soon effectively cap the capacities of drives utilizing longitudinal recording. Although additional capacity gains are still achievable, the drive industry is moving to perpendicular recording, as longitudinal recording has basically hit the proverbial brick wall after more than 50 years of use.
Perpendicular recording will eventually enable areal densities of up to 500 Gbpsi (gigabits per square inch) with current technology, compared to 110 Gbpsi in today's longitudinal recording designs. This is an almost fivefold increase in storage capacity, with a typical 3.5-inch desktop drive able to store 2TB of information in the near future. If all of this sounds a little daunting, Hitachi has developed a simple explanation of PMR in their Get Perpendicular presentation, but be forewarned: the jingle might get stuck in your head for the rest of the day. For those in need of additional technical details, we suggest a good cup of coffee and a visit to the white papers section over at Hitachi.
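Since platter capacity scales roughly linearly with areal density, the quoted figures directly imply the size of the jump; a quick sketch with the numbers from the paragraph above:

```python
# Areal density figures quoted above (gigabits per square inch)
longitudinal_gbpsi = 110    # today's longitudinal designs
perpendicular_gbpsi = 500   # eventual ceiling for perpendicular recording

# Capacity scales roughly linearly with areal density,
# so the ratio of densities is the ratio of achievable capacities.
scale = perpendicular_gbpsi / longitudinal_gbpsi
print(f"~{scale:.1f}x increase")  # prints "~4.5x increase"
```

That ~4.5x factor is the "almost fivefold" increase cited above; real-world capacities also depend on platter count and formatting overhead, which this linear scaling ignores.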
Let's see how the newest Deskstar 7K1000 performs against other SATA based drives.
74 Comments
mikeg - Thursday, April 26, 2007 - link
It's been over a month since the article came out and I still don't see any in retail stores, or a non-OEM drive. Where can I get one?? Anyone see a retail box of these drives at a retailer? I want to get a couple. - Mike
jojo4u - Monday, March 26, 2007 - link
Hello Gary, the Hitachi datasheet refers to three idle modes using APM. The results with AAM enabled could suggest that APM is automatically engaged with AAM. So perhaps one should check the APM level with Hitachi's Feature Tool or the generic tools hdparm (http://hdparm-win32.dyndns.org/hdparm/) or hddscan.
Gary Key - Friday, March 30, 2007 - link
We had a lengthy meeting with the Hitachi engineers this week to go over APM and AAM modes along with the firmware that is shipping on the Dell drives. I hope to have some answers next week, as testing APM capabilities on a Dell based system resulted in slightly different behavior than our test bench. I have completed the balance of testing with various AAM/NCQ on/off combinations and some additional benchmark tests. I am hoping to update the article next week. Also, I ran acoustic tests in a different manner and will have those results available. Until then, I did find out that sitting a drive on a foam brick outside of a system and taking measurements from the top will mask some of the drive's acoustic results. The majority of noise emitted from this drive comes from the bottom, not the top. ;)
ddarko - Monday, March 26, 2007 - link
"However, Hitachi has informed us they have the capability to go to 250GB per-platter designs but launched at smaller capacities to ensure their reliability rate targets were met. Considering the absolute importance of data integrity we think this was a wise move."This sounds like an sneaky attempt by Hitachi to raise doubt about the safety of Seagate's forthcoming 1TB drive. Where is the data to support this rather bold statement that 250GB platters designs are not as capable as 200GB designs of meeting these completely unspecified "reliability rate targets"? What does that even mean? Can we infer that 150GB platter designs are even more reliable than 200GB designs? It's disappointing to see the review accept Hitachi's statement without question, going so far as to even applaud Hitachi for its approach without any evidence whatsoever to back it.
Lord Evermore - Thursday, March 22, 2007 - link
While I know memory density in general isn't increasing nearly as fast as hard drive size, 32MB cache seems pretty chintzy for a top-end product. I suppose 16MB on the 750GB drives is even worse.
My first 528MB hard drive with a 512KB cache was a 1/1007 ratio (using binary cache size, and labelled drive size which would be around binary 512MB). Other drives still had as little as 128KB cache, so they could have been as little as a 1/4028 ratio, but better with smaller drives. I think anything larger than 512MB always had 512KB.
A 20GB drive with 2MB cache is 1/9536 ratio.
A 100GB drive with 2MB cache is 1/47683.
Then the jump to 8MB cache makes the ratio much better at 1/11920 for a 100GB drive (I'm ignoring the lower-cost models that had higher capacities, but still 2MB cache). Then it gets progressively worse as you get up to the 500GB size drives. Then we make another cache size jump, and the 160GB to 500GB models have a 16MB option, which is back to 1/9536 on a 160GB, to 1/29802 on a 500GB.
The trend here being that we stick with a particular cache size as drive size increases so the ratio gets worse and worse, then we make a cache size jump which improves the ratio and it gets worse again, then we make another cache size jump again.
Now we go to 750GB drives with 16MB cache. Now we are up to a 1/44703 ratio, only the 2nd worst ever; seems like time for another cache increase. Jumping to 32MB with a 1TB drive only makes it 1/29802. Not a very significant change despite doubling the cache again, since the drive size also increased, and it'll only get worse as they increase the drive size. Even 32MB on a 750GB drive is 1/22351, only slightly better than the 16MB/500GB flagship drives when they came out, and we don't even HAVE a 32MB/750GB drive.
A 512MB cache would be nice. That's still not the best ratio ever, it's still 1/1862, but that's a heck of a lot better than 1/30,000th. At the very least, they need to jump those cache chip densities a lot, or use more than one. Even a single 512Mbit density chip would be 64MB, still not great but better.
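The commenter's ratios can be reproduced with a short sketch; it follows their apparent convention (our assumption, since it matches the quoted figures) of decimal gigabytes for labelled capacity and binary megabytes for cache:

```python
def cache_ratio(capacity_gb, cache_mb):
    """Bytes of disk per byte of cache: decimal GB capacity, binary MB cache."""
    return round(capacity_gb * 1000**3 / (cache_mb * 1024**2))

print(cache_ratio(750, 16))    # prints 44703, the 1/44703 ratio above
print(cache_ratio(1000, 32))   # prints 29802, the 1/29802 ratio above
print(cache_ratio(1000, 512))  # prints 1863, roughly the 1/1862 figure above
```

The mixed decimal/binary units matter: using binary gigabytes throughout would shift every ratio by about 7%.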
Per Hansson - Sunday, March 25, 2007 - link
Bigger caches would almost make it a necessity that you run the system on a UPS. Losing 32MB of data that is yet to be written to the platters is a lot, but 512MB?
And the UPS would not take into account OS crashes...
I'm not sure how much this would affect performance either, but a review of a SCSI drive and a SCSI controller with 2MB - 1GB of cache would answer that question well...
yehuda - Wednesday, March 21, 2007 - link
Do they plan to launch a single platter variant sometime in the near future?
Gary Key - Wednesday, March 21, 2007 - link
They will be releasing a 750GB variant in May. Our initial reports have the single platter drives along with the 300~500GB models coming later in the summer. I am trying to get that confirmed now.
DeathSniper - Tuesday, March 20, 2007 - link
Last page: "The Achilles heal of the Seagate 750GB drive..." I think it should be heel, not heal ;)
Spacecomber - Tuesday, March 20, 2007 - link
While this drive has enough in the way of other features to make it stand out from the crowd, I was a bit surprised to see that Hitachi hadn't upped the warranty to 5 years for this drive, which is what Seagate offers on most of their drives and WD offers on their Raptors.