Testing is nearly complete on the last Corsair SSD that came my way, but this morning UPS dropped off another surprise: the Corsair Force SSD. The Force is based on a derivative of the controller in the OCZ Vertex LE I reviewed earlier this year, using the mainstream version of SandForce's technology. Here's how it breaks down: last year's Vertex 2 Pro used an SF-1500 controller, the Vertex LE uses something in between an SF-1500 and an SF-1200 (closer to the SF-1500 in performance), while the Corsair Force uses an SF-1200.

The SF-1200 has all of the goodness of the SF-1500, just without some of the more enterprise-y features. I haven't been able to get a straight answer from anyone as to exactly what you give up by going to the SF-1200, but you do gain a much more affordable price. The Vertex LE is only low in price because it uses a limited run of early controllers from SF, presumably so SandForce can raise capital. SF-1200 based SSDs should be price-competitive with current Indilinx offerings.

You'll notice that, like the Vertex LE, there's no supercap on the Force's PCB. There's also no external DRAM cache, thanks to a large amount of on-die cache and SandForce's real-time data compression/deduplication technology. As you may remember from my Vertex 2 Pro and Vertex LE reviews, SandForce achieves higher performance by simply reducing the amount of data it has to write to NAND (similar to lossless compression or data deduplication).
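To illustrate the idea, here's a toy sketch in Python. This is not SandForce's actual DuraWrite algorithm (its internals aren't public); it just shows how compressibility changes the number of bytes a compressing controller would have to commit to NAND:

```python
import os
import zlib

def nand_bytes_written(data: bytes) -> int:
    """Toy model of a compressing controller: only the compressed
    size hits NAND, and never more than the original size."""
    return min(len(zlib.compress(data)), len(data))

# Repetitive data (logs, databases, OS files) shrinks dramatically...
text = b"the quick brown fox jumps over the lazy dog\n" * 1024
# ...while already-compressed data (JPEGs, video) doesn't shrink at all.
random_like = os.urandom(len(text))

print(nand_bytes_written(text), "of", len(text), "bytes hit NAND")
print(nand_bytes_written(random_like), "of", len(random_like), "bytes hit NAND")
```

The fewer bytes that actually hit NAND, the fewer program/erase cycles get consumed and the faster writes complete, which is how SandForce gets away without an external DRAM cache.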

I've got the Force on my SSD testbench now and should have the first results by the end of the day. This one is exciting, as it could give us a preview of what the performance mainstream SSD market will look like for the rest of 2010.

More pics of the drive in the Gallery!


41 Comments


  • SandmanWN - Tuesday, April 13, 2010 - link

    Not following you.
    28 out of 100 is 28%.
    28 out of 93 is 30%.
    The only way you get 37% is if you count the 7 lost to formatting as spare area, but...
  • JarredWalton - Tuesday, April 13, 2010 - link

    To clarify, 128 is 37% more capacity than 93.
  • SandmanWN - Tuesday, April 13, 2010 - link

    You lost 7 to formatting, which isn't spare, so it's still 28 spare / 93 usable, thus 30%.
  • Voo - Tuesday, April 13, 2010 - link

    Ahem, guys, sorry to destroy your enthusiasm to correct an AT writer, but flash capacities are usually powers of 2, which means we're talking about 128 GiB like Jarred correctly said. And since they sell them as 100GB (notice the missing i), that means the drives have ~93.1 GiB usable (notice the i - yes, I know SI vs. powers of two is annoying). So we're either computing:
    128/93.13 - 1 = 37.4%
    or
    137.4/100 - 1 = 37.4%

    Both the same and both correct =)
  • JarredWalton - Tuesday, April 13, 2010 - link

    He's actually correct, though. The raw NAND is 37% more than the usable capacity, but the 35GiB used as spare area is 27.3% of the 128GiB total. The joys of percentages.... :-)
  • Voo - Wednesday, April 14, 2010 - link

    Ah, OK, so it's just a dispute about which value to use. And yeah, it's probably more useful to express spare area as a percentage of the total than the other way round. Though you've still got to be careful with GB vs. GiB.
  • SandmanWN - Tuesday, April 13, 2010 - link

    The point isn't to pull a gotcha on the writer; it's to determine the optimal spare-area setup for these drives. Just as the amount of cache can lead to better performance for HDDs, the same needs to be known for spare area and SSDs.

    Will changing the amount of spare area boost performance enough to justify the loss of usable space? Different drive manufacturers are bound to play with these ratios. Which will come out ahead? At what point do diminishing returns push one toward usable space versus a larger spare area? Can the spare area be adjusted in the drive firmware?

    These numbers will become important if spare areas start showing up on more drives. Just wanted to make sure the numbers were on the up and up for future reference.
  • JarredWalton - Wednesday, April 14, 2010 - link

    Actually, here's another thought to consider:

    In terms of pure area used, yes, they set aside 27.3% of the available capacity. However, with DuraWrite (i.e. compression) they could actually have even more spare area than 35GiB. I wonder if you're guaranteed 93GiB of storage capacity: if your data happens to compress better than average you'll have more spare area left (and more performance), while with data that doesn't compress well (e.g. movies and JPG images) you'll have less spare area remaining. Of course, even at 0% compression you'd still have at least 35GiB of spare, but with a reasonable 25% compression average you might have as much as ~58GiB of spare area. Hmmm.....
  • Dark Legion - Tuesday, April 13, 2010 - link

    So they're using ~27% of the total capacity as spare area.
  • Dark Legion - Tuesday, April 13, 2010 - link

    Sorry, forgot the 7 GB that's not spare... 28GB spare / 128GB total = 22%.
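For reference, the capacity math debated above can be checked directly. This uses only the thread's own numbers (128 GiB of raw NAND, an advertised decimal 100 GB, and Jarred's assumed 25% average compression); none of these figures come from Corsair:

```python
GIB = 2**30                   # binary gibibyte

raw_nand = 128 * GIB          # flash ships in powers of two: 128 GiB
usable = 100 * 10**9          # the advertised "100 GB" is decimal (SI)
spare = raw_nand - usable     # everything else is spare area (~34.9 GiB)

# The two ways the thread slices the same numbers:
print(f"extra NAND over usable capacity: {raw_nand / usable - 1:.1%}")  # 37.4%
# (Jarred's 27.3% comes from rounding the spare to 35 GiB: 35/128)
print(f"spare as a share of raw NAND:    {spare / raw_nand:.1%}")       # ~27.2%

# Jarred's thought experiment: if user data compresses 25% on average,
# the effective spare area grows accordingly.
effective_spare = raw_nand - usable * 0.75
print(f"effective spare at 25% compression: {effective_spare / GIB:.1f} GiB")  # ~58.2 GiB
```

Both 37.4% and ~27% are correct; they just divide by different bases (usable capacity vs. raw NAND).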
