Hitachi Deskstar 7K1000: Terabyte Storage Arrives on the Desktop
by Gary Key on March 19, 2007 8:00 AM EST - Posted in Storage
Hitachi Global Storage Technologies announced just before CES 2007 that it would ship a new 1TB (1000.2GB) hard disk drive in Q1 of this year at an extremely competitive price of $399, or just about 40 cents per GB of storage. We fully expected Seagate to beat Hitachi in the race to a single terabyte drive after its 750GB drive shipped last June. However, it now appears the Seagate Barracuda 7200.11 1TB drive will not ship until Q2 of this year, leaving Hitachi alone at the top of the storage capacity mountain for now.
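As a quick sanity check on that pricing claim, here is a minimal sketch in Python, using only the $399 price and 1000.2GB formatted capacity quoted above, that works out the cost per gigabyte:

# Launch pricing check using the figures quoted in the announcement.
price_usd = 399.0
capacity_gb = 1000.2

cost_per_gb = price_usd / capacity_gb
print(f"Cost per GB: ${cost_per_gb:.3f} (~{cost_per_gb * 100:.0f} cents)")
# Cost per GB: $0.399 (~40 cents)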
The Seagate delay appears to stem from its decision to move to 250GB per-platter capacities, while Hitachi is launching the Deskstar 7K1000 with a 200GB per-platter capacity. From all indications, Seagate's drive will stay with a 16MB cache, while Hitachi has wisely chosen to implement a 32MB cache for its five-platter design. Hitachi has informed us it has the capability to build 250GB per-platter designs but launched at the lower density to ensure its reliability targets were met. Considering the absolute importance of data integrity, we think this was a wise move.
What we did not expect was that this drive would be offered exclusively through Dell and its subsidiary Alienware in select XPS, Aurora, and Area 51 gaming desktops before general retail availability in the next two to three weeks. Along with the immediate availability of the Deskstar 7K1000 from Dell and Alienware comes a new customer program called StudioDell. StudioDell is a community website that lets consumers and business customers see tips and trends in technology, as well as submit their own videos showcasing how they are using technology in innovative ways. All video content submitted to StudioDell for the remainder of 2007 will be copied onto a 1TB Hitachi Deskstar 7K1000 hard drive and stored for 50 years on the Dell campus in Round Rock, TX. You can visit the StudioDell website for more details.
For over fifty years, the storage industry has been on a path where the longitudinal recording technology currently in use would eventually become a limiting factor in drive capacities. Over the last decade the drive manufacturers have been doubling, and at times quadrupling, storage capacity at a dizzying rate to meet continuing demand from users. In fact, it took the industry almost 35 years to reach the 1GB level, another 14 years to reach 500GB, and now, with perpendicular recording technology, less than two years to reach 1TB.
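Those milestones also imply a fairly steady doubling cadence, a point picked up in the comments below. A rough back-of-the-envelope sketch, assuming only the 1GB-to-500GB-in-14-years figure above, puts the historical doubling period near 19 months:

import math

# Doubling-period estimate from the milestones quoted above:
# roughly 14 years to grow from 1GB to 500GB.
growth_factor = 500 / 1
years = 14
doublings = math.log2(growth_factor)            # ~9 doublings
months_per_doubling = years * 12 / doublings
print(f"{doublings:.1f} doublings, ~{months_per_doubling:.0f} months per doubling")
# 9.0 doublings, ~19 months per doubling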
The standard method of increasing a drive's capacity is to either add more platters or increase the density of the data on each platter. Increasing the data density on each platter is the preferred approach, as it raises overall drive capacity while delivering performance and cost advantages by reducing the number of components. However, it requires significantly more research and development effort, which can add complexity and cost. While the storage manufacturers have developed and implemented some incredible technologies to achieve the capacities, costs, and performance we currently enjoy, there is a limit to what can be achieved with longitudinal recording technology.
The next evolutionary step for the storage industry is perpendicular recording technology. This technology has been researched and discussed for a number of years by the various drive manufacturers, though it has its roots in the late 19th century work of Danish scientist Valdemar Poulsen, who is generally considered the first person to magnetically record sound using perpendicular recording.
What is Perpendicular Recording Technology? Simply put, in perpendicular recording the magnetization of the disc stands on end, perpendicular to the plane of the disc, instead of lying in the disc's plane as it does in current longitudinal recording. Data bits are represented as regions of upward or downward directed magnetization, whereas in longitudinal recording the bit magnetization lies in the plane of the disc and alternates between pointing with and against the direction of head movement. The media is written above a soft magnetic under-layer that functions as part of the write field return path; it effectively mirrors the recording head, doubling the available recording field and allowing a higher recording density than longitudinal recording.
To increase areal densities and provide greater storage capacity in longitudinal recording, the data bits must be packed into an ever tighter pattern on the disc media. However, if a data bit becomes too small, the magnetic energy holding the bit in place can become so weak that thermal energy alone can demagnetize it, a condition known as superparamagnetism.
To avoid superparamagnetism, engineers have been increasing the coercivity (the field strength required to write a bit) of the disc media. These write fields are limited by the magnetic materials making up the write head, and that limit will soon cap the capacities achievable with longitudinal recording. Although additional capacity gains are still possible, the drive industry is moving to perpendicular recording technology, as longitudinal recording has essentially hit the proverbial brick wall after more than 50 years of service.
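The thermal stability problem is often summarized with a rule of thumb that frames this trade-off; the figure below is an industry rule of thumb rather than something taken from Hitachi's material. The anisotropy energy holding a grain's magnetization in place must stay well above the ambient thermal energy, typically by a factor of roughly 40 to 60 for a usable data lifetime:

% Common thermal-stability criterion for a magnetic grain (rule of thumb,
% not taken from the article):
\[
  \frac{K_u V}{k_B T} \gtrsim 40 \text{ to } 60
\]
% K_u: anisotropy energy density, V: grain volume,
% k_B: Boltzmann constant, T: absolute temperature.
% Shrinking V (smaller bits) lowers the ratio unless K_u is raised, which in
% turn raises the coercivity and the write field the head must deliver.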
Perpendicular recording will eventually enable areal densities of up to 500 Gbpsi (Gigabits per square inch) with current technology, compared to 110 Gbpsi in today's longitudinal recording designs. This amounts to an almost five-fold increase in storage capacity, with a typical 3.5-inch desktop drive able to store 2TB of information in the near future. If all of this sounds a little daunting, Hitachi developed a simple explanation of PMR in its Get Perpendicular presentation, but be forewarned: the jingle might be stuck in your head for the rest of the day. For those in need of additional technical details, we suggest a good cup of coffee and a visit to the white papers section over at Hitachi.
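To put those areal density numbers in perspective, here is a minimal scaling sketch. It assumes platter capacity scales roughly linearly with areal density (an assumption on our part, since real designs also change platter counts and formats), and it reproduces the projections above:

# Scaling sketch using only the areal-density figures quoted above; assumes
# capacity scales roughly linearly with areal density.
longitudinal_gbpsi = 110
perpendicular_gbpsi = 500

scaling = perpendicular_gbpsi / longitudinal_gbpsi
print(f"Density scaling: {scaling:.1f}x")             # ~4.5x, "almost five-fold"

# A 500GB-class longitudinal design scaled to mature perpendicular densities:
print(f"Projected capacity: ~{0.5 * scaling:.1f}TB")  # ~2.3TB, in line with the 2TB claim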
Let's see how the newest Deskstar 7K1000 performs against other SATA-based drives.
74 Comments
goldfish2 - Tuesday, March 20, 2007 - link
Just noticed a problem you may wish to address with your charts, hope this hasn't already been mentioned. Take a look at the chart 'video application timing - time to transcode DVD'.
Your times are in minutes/seconds, but it seems your chart application has interpreted the numbers as decimals and scaled the bar lengths on that basis. Take a look at the bar for the WD5000YS 500GB. It says 4.59; I assume this means 4 minutes 59 seconds, making the WD740GD 2 seconds slower at 5 minutes 1 second. But the bar lengths are scaled as decimals, so the bar for the WD740GD is much longer. You'll have to see if you can get your graph package to think in minutes:seconds, or enter the bar lengths in decimal (i.e. 4:30 becomes 4.5 minutes) and label them in minutes for readability.
Thanks for the review though.
Gary Key - Tuesday, March 20, 2007 - link
We have a short blurb under the Application Performance section - "Our application benchmarks are designed to show application performance results with times being reported in minutes / seconds or seconds only, with lower scores being better. Our graph engine does not allow for a time format such as 1:05 (one minute, five seconds) so this time value will be represented as 1.05."
We know this is an issue and hopefully we can address it in our next engine update (coming soon from what I understand). I had used percentage values in a previous article that was also confusing to some degree. Thanks for the comments and they have been passed on to our web team. ;)
PrinceGaz - Tuesday, March 20, 2007 - link
The simplest and most logical solution is just to enter the time in seconds, rather than minutes and seconds; even if graphed correctly, comparing values composed of two units (minutes:seconds) is difficult compared to a single unit (seconds). If two results were 6:47 and 7:04 for instance, the difference between them is much clearer if you say 407 and 424 seconds. By giving the value in seconds only, you can see at a glance that there is a 17 second difference, which translates to just over 4% (17 divided by 407/100, or 17 divided by about 4.1).
Doing the same mental calculation with 6:47 and 7:04 first involves working out the difference with the extra step of dealing with 60 seconds to a minute. Then you have a difference of 17 seconds out of a little under 7 minutes, which isn't very helpful until you convert the 7 minutes to seconds, as it should have been originally.
That's my opinion anyway.
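For what it's worth, the conversion described in the two comments above is only a couple of lines. Here is a hypothetical helper (not the site's actual graph engine) that turns the "minutes.seconds" labels into plain seconds and reports the percentage gap:

# Hypothetical helper, not AnandTech's graph engine: convert the
# "minutes.seconds" labels used in the charts into plain seconds so bar
# lengths and percentage comparisons come out right.
def label_to_seconds(label: str) -> int:
    minutes, seconds = label.split(".")
    return int(minutes) * 60 + int(seconds)

a = label_to_seconds("6.47")   # 407 seconds
b = label_to_seconds("7.04")   # 424 seconds
print(a, b, f"{(b - a) / a * 100:.1f}% slower")
# 407 424 4.2% slower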
JarredWalton - Tuesday, March 20, 2007 - link
Hi Gary. I told you so! Damned if you do, damned if you don't. ;) (The rest of you can just ignore me.)
PrinceGaz - Tuesday, March 20, 2007 - link
How can you say only two years?
The 14 years you say it took to increase from 1GB to 500GB represents a doubling of capacity nine times, or roughly 1.56 years (19 months) for capacity to double. That means that the two years (actually 20 months, as Hitachi released a 500GB drive in July 2005) it took to double again, from 500GB to 1TB, is actually marginally longer than average.
It would be more accurate to say that the trend of capacities doubling roughly every 18 months is continuing.
patentman - Tuesday, March 20, 2007 - link
The two year remark is two years from the first commercial perpendicular recording drive. Perpendicular recording has been in the works for a long time. In fact, when I used to examine patent applications for a living, there was patent literature related to perpendicular recording all the way back in 1990-1991, albeit for relatively simple aspects of the device.
Gary Key - Tuesday, March 20, 2007 - link
The averaging of the time periods does work out to a doubling of capacities every 18~20 months but the last doubling took about 26 months to go from 250GB to 500GB.
mino - Wednesday, March 21, 2007 - link
Yes, but the first 250GB drives were 4-platter 5400rpm ones (Maxtor?)... The first 500GB were 5-platter 7200rpm ones.
IMO there are little discrepancies in the trend caused by the worry over many-platter drives after the 75GXP. After a few years Hitachi came back with the 7K400 and the curve just returned to the values it lost before...
scott967 - Tuesday, March 20, 2007 - link
On these big drives is NTFS performance an issue at all?
scott s.
AbRASiON - Monday, March 19, 2007 - link
Too slow, too much money, too little space. I've owned 3 and sold them.
When are we going to see a 15krpm Savvio 2.5" review?
When will we see a 180gb per platter 32mb 10,000rpm new series raptor?
Maybe WD should also make a 15krpm 2.5" - 32mb model
These incremental speed upgrades on hard disks are terrible :( need more, much much more.