Hitachi Deskstar 7K1000: Terabyte Storage Arrives on the Desktop
by Gary Key on March 19, 2007 8:00 AM EST - Posted in Storage
Hitachi Global Storage Technologies announced right before CES 2007 that it would be shipping a new 1TB (1000.2GB) hard disk drive in Q1 of this year at an extremely competitive price of $399, or just about 40 cents per GB of storage. We fully expected Seagate to beat Hitachi in the race to a single-terabyte drive, an expectation set when their 750GB drive shipped last June. However, it now appears the Seagate Barracuda 7200.11 1TB drive will not ship until Q2 of this year, leaving Hitachi alone at the top of the storage capacity mountain for now.
The Seagate delay appears to have come from their decision to move to 250GB per-platter capacities, while Hitachi is launching the Deskstar 7K1000 with a 200GB per-platter capacity. From all indications, Seagate's drive will continue to use a 16MB cache, while Hitachi has wisely chosen to implement a 32MB cache for their five-platter design. Hitachi has informed us they have the capability to go to 250GB per-platter designs but launched at the smaller capacity to ensure their reliability targets were met. Considering the absolute importance of data integrity, we think this was a wise move.
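As a quick sanity check on the launch figures above, here is a minimal sketch; the platter count, per-platter capacity, and price are simply the numbers quoted in the text:

```python
# Back-of-the-envelope check on the Deskstar 7K1000 launch numbers.
platters = 5
per_platter_gb = 200                     # Hitachi's 200GB-per-platter design
capacity_gb = platters * per_platter_gb  # 1000 decimal GB, i.e. the "1TB" on the label

launch_price_usd = 399.0
cost_per_gb = launch_price_usd / capacity_gb
print(capacity_gb, round(cost_per_gb, 3))  # 1000 0.399 -> roughly 40 cents per GB
```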
What we did not expect was that this drive would be offered exclusively through Dell or its subsidiary Alienware in select XPS, Aurora, or Area 51 gaming desktop systems before general retail availability in the next two to three weeks. Along with the immediate availability of the Deskstar 7K1000 from Dell or Alienware comes a new program being offered to their customers called StudioDell. StudioDell is a community website that allows general consumers and business customers to see tips and trends in technology, as well as submit their own videos showcasing how they are using technology in innovative ways. All video content submitted to StudioDell for the remainder of 2007 will be copied onto a 1TB Hitachi Deskstar 7K1000 hard drive and stored for 50 years on the Dell campus in Round Rock, TX. You can visit the StudioDell website for more details.
For over fifty years the storage industry has known that the longitudinal recording technology currently utilized would eventually become a limiting factor in drive capacities. Over the last decade the drive manufacturers have been doubling, and at times quadrupling, storage capacity at a dizzying rate in order to meet continuing demand from users. In fact, it took the industry almost 35 years to reach the 1GB level, another 14 years to reach 500GB, and now, with perpendicular recording technology, less than two years to reach 1TB.
The standard method of increasing the capacity of a drive is to either add more platters or increase the density of the data on each platter. Increasing the density of data that can be stored on a platter is the preferred approach, as it allows an overall increase in drive capacity along with performance and cost advantages from reducing the number of components. However, this approach requires significantly more research and development effort, which can lead to additional complexity and cost. While the storage manufacturers have been able to develop and implement some incredible technologies to achieve the capacities, cost, and drive performance we currently enjoy, there is a limit to what can be achieved with longitudinal recording technology.
The next evolutionary step for the storage industry in solving these issues is perpendicular recording technology. This technology has been researched and discussed for a number of years by the various drive manufacturers, though its roots go back to the late 19th century work of Danish scientist Valdemar Poulsen, who is generally considered to be the first person to magnetically record sound using perpendicular recording.
What is perpendicular recording technology? Simply put, in perpendicular recording the magnetization of the disc stands on end, perpendicular to the plane of the disc, instead of lying in the disc's plane as it does in current longitudinal recording. The data bits are then represented as regions of upward or downward directed magnetization, whereas in longitudinal recording the data bit magnetization lies in the plane of the disc and switches between pointing in the same and the opposite direction of head movement. The recording layer is written above a soft magnetic under-layer that functions as part of the write field return path; this under-layer effectively generates an image of the recording head that doubles the available recording field, resulting in a higher recording density compared to longitudinal recording.
In order to increase areal densities and provide greater storage capacity in longitudinal recording, the data bits must be arranged and shrunk into a very tight pattern on the disc media. However, if a data bit becomes too small, the magnetic energy holding the bit in place can become so small that thermal energy causes it to demagnetize, a condition known as superparamagnetism.
To stave off superparamagnetism, engineers have been increasing the coercivity of the disc media, that is, the field strength required to write a bit. These fields are limited by the magnetic materials making up the write head, which will soon effectively cap the capacities achievable with longitudinal recording. Although additional capacity gains are still achievable, the drive industry is in the process of moving to perpendicular recording technology, as longitudinal recording has basically hit the proverbial brick wall after being utilized for 50-plus years.
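For readers who want the criterion behind this, the thermal stability of a recorded bit is commonly expressed as the ratio of its magnetic anisotropy energy to the ambient thermal energy; a figure of roughly 40 to 60 is often cited in the storage literature as the minimum for reliable long-term retention (this rule of thumb comes from the general literature, not from Hitachi's material):

$$\frac{K_u V}{k_B T} \gtrsim 40\text{-}60$$

Here $K_u$ is the anisotropy energy density of the media, $V$ the volume of the magnetic grains making up a bit, $k_B$ Boltzmann's constant, and $T$ the temperature. Shrinking bits shrinks $V$, so either $K_u$ must rise (and with it the coercivity, and thus the required write field) or the bits start flipping on their own.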
Perpendicular recording will eventually enable areal densities of up to 500 Gbpsi (gigabits per square inch) with current technology, as compared to the 110 Gbpsi of today's longitudinal recording designs. This amounts to an almost five-fold increase in storage capacity, with a typical 3.5-inch desktop drive able to store 2TB of information in the near future. If all of this sounds a little daunting, Hitachi has put together a simple explanation of PMR in their Get Perpendicular presentation, but be forewarned: the jingle might get stuck in your head for the rest of the day. For those in need of additional technical details, we suggest a good cup of coffee and a visit to the white papers section over at Hitachi.
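To put those areal density figures in perspective, here is a trivial sketch of the scaling they imply (assuming capacity scales roughly linearly with areal density, which ignores format overhead and changes in platter count):

```python
# Rough scaling implied by the areal density figures quoted above.
longitudinal_gbpsi = 110    # today's longitudinal designs (per the text)
perpendicular_gbpsi = 500   # eventual perpendicular recording target (per the text)

scaling = perpendicular_gbpsi / longitudinal_gbpsi
print(round(scaling, 1))    # ~4.5x, i.e. "almost five fold"

# At that limit a 200GB platter would hold roughly 200 * 4.5 = ~900GB,
# so a five-platter 2TB drive needs only a fraction of this headroom.
print(round(200 * scaling))  # ~909
```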
Let's see how the newest Deskstar 7K1000 performs against other SATA-based drives.
74 Comments
Gary Key - Monday, March 19, 2007 - link
It has worked well for us to date. We also took readings with several other programs and a thermal probe. All readings were similar, so we trust it at this time. I understand your concern as the sensors have not always been accurate.
mkruer - Monday, March 19, 2007 - link
I hate this decimal Byte rating they use. They say the capacity is 1 TeraByte, meaning 1,000,000,000,000 Bytes, which actually translates into ~930GB or 0.93TB that the OS will see using the more commonly used (base 2) metric. This is the metric people assume you are talking about. When will the drive manufacturers get with the picture and list the standard Byte capacity?
Spoelie - Tuesday, March 20, 2007 - link
I don't think it matters all that much; once you've heard it you know it. There's not even a competitive marketing advantage or any scamming going on, since ALL the drive manufacturers use it and in marketing material there's always a note somewhere explaining 1GB = blablabla bytes. So 160GB on one drive = 160GB on another drive. That it's not the formatted capacity has been made clear for years now, so I think most of the people it matters to know.
Zoomer - Wednesday, March 21, 2007 - link
IBM used to not do this. Their advertised 120GB drive was actually 123.xxGB, where the GB referred to the decimal giga. This made usable capacity a little over 120GB. :)
JarredWalton - Monday, March 19, 2007 - link
See above, as well as the SI prefix overview (http://en.wikipedia.org/wiki/SI_prefix) and binary prefix overview (http://en.wikipedia.org/wiki/Binary_prefix) for details. It's telling that this came into being in 1998, at which time there was a class action lawsuit occurring, I believe.
Of course, you can blame the computer industry for just "approximating" way back when KB and MB were first introduced to be 1024 and 1048576 bytes. It probably would have been best if they had created new prefixes rather than cloning the SI prefixes and altering their meaning.
It's all academic at this point, and we just try to present the actual result for people so that they understand what is truly meant (i.e. the "Formatted Capacity").
Olaf van der Spek - Monday, March 19, 2007 - link
The screenshot shows only 1 x 10^12 bytes. :(
And I'm wondering, do you know about any plans for 2.5" desktop drives (meaning, not more expensive than the cheapest 3.5" drives and with better access times)?
crimson117 - Monday, March 19, 2007 - link
How many bytes does this drive actually hold? Is it 1,000,000,000,000 bytes or 1,099,511,627,776 bytes?
It's interesting... it used to not seem like a huge difference, but now that we're approaching such high capacities, it's almost a 100 GB difference - more than most laptop hard disks!
crimson117 - Monday, March 19, 2007 - link
I should learn to read: Operating System Stated Capacity: 931.5 GB
JarredWalton - Monday, March 19, 2007 - link
Of course, the standards people decided (AFTER the fact) that we should now use GiB and MiB and TiB for multiples of 1024 (2^10). Most of us grew up thinking 1KB = 1024B, 1MB = 1024KB, etc. I would say the redefinition was in large part to prevent future class action lawsuits (i.e. I could see storage companies lobbying SI to create a "new" definition). Windows of course continues to use the older standard.
Long story short, multiples of 1000 are used for referring to bandwidth and - according to the storage sector - storage capacity. Multiples of 1024 are used for memory capacity and - according to most software companies - storage capacity. SI sides with the storage people on the use of mebibytes, gibibytes, etc.
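For concreteness, the conversion this thread keeps circling is sketched below (the 1000.2GB figure is the drive's stated decimal capacity from the article; the exact byte count will differ slightly):

```python
# Decimal (drive-maker) capacity vs. binary (OS-reported) capacity.
stated_bytes = 1000.2e9                # ~1000.2GB as stated by Hitachi

decimal_gb = stated_bytes / 1000**3    # what the label says: ~1000.2 GB
binary_gib = stated_bytes / 1024**3    # what Windows reports: ~931.5 "GB" (really GiB)

print(round(decimal_gb, 1), round(binary_gib, 1))  # 1000.2 931.5
```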
mino - Tuesday, March 20, 2007 - link
Ehm, ehm. GB was ALWAYS spelled Giga-Byte, and Giga-, with the short form "G", has been a standard prefix for 10^9 since the 19th century (maybe longer).
The ones who screwed up were the software guys, who just ignored the fact that 1024 != 1000 and used the same prefix with a different meaning.
SI ignored this stupidity for a long time.
Lately the SI guys realized the software guys are too careless to accept the reality that 1024 really does not equal 1000.
It is far better to have some standard way to define 1024-multiples and have many people use the old, wrong prefixes than to have no such definition at all.
I remember clearly how confused I was back in 8th grade Informatics class when the teacher tried (and failed, back then) to explain why SI prefixes mean 10^x everywhere else, but in computers they mean 2^10, aka 1024.
It took me some 4 years until I was comfortable enough with power-of-something numbers that it did not matter whether one said 512 or 2^9 to me.
This prefix issue is a mess SI neither created nor caused. They are just trying to clean it up in the only way possible.