Not to sound like a broken record, but with the exception of OCZ's Vertex LE, not much has changed in the SSD market over the past couple of years. Intel still seems like the safest bet, and these days they're even offering a pretty compelling value.

The 80GB X25-M G2 is finally selling for reasonable prices and earlier this month Intel launched its first value SSD: the X25-V. Priced at $125, the X25-V gives you much of the performance of the X25-M but at a lower cost and capacity point. It's a great way to safely transition to an SSD.

Intel's X25-V uses the same controller as the X25-M G2, but with half the NAND and thus half the channels

For months now you all have been asking me to tackle the topic of RAIDing SSDs. I've been cautious about doing so for a number of reasons:

1) There is currently no way to pass the TRIM instruction to a drive that is a member of a RAID array. Intel's latest RAID drivers let you TRIM a non-member disk on a system running in RAID mode, but not an SSD that is actually part of an array.

2) Giving up TRIM support means that you need a fairly resilient SSD, one whose performance will not degrade tremendously over time. On the bright side, with the exception of the newer SandForce controllers, I'm not sure we've seen a controller as resilient as Intel's.

A couple of weeks ago I published some early results for Intel's X25-V SSD. But I was holding out on you: I actually had two.

Using the same Intel X58 testbed I've been using for all of my SSD tests, I created a 74.5GB RAID-0 array out of the two drives and quickly ran them through almost all of our benchmarks. At a total cost of $250, a pair of X25-Vs will set you back more than a single 80GB X25-M and you do give up TRIM, but is the performance worth it?
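To see where RAID-0's bandwidth scaling comes from, it helps to look at how striping maps logical blocks onto the member drives. Here is a minimal Python sketch; the 128KB stripe size is an illustrative default, not necessarily what Intel's Matrix RAID actually uses for this array.

```python
# Minimal sketch of RAID-0 striping: logical blocks are interleaved
# across member drives in fixed-size stripes. The stripe size and
# drive count here are illustrative assumptions.

STRIPE_SIZE = 128 * 1024   # 128KB stripe, a common RAID-0 default
NUM_DRIVES = 2             # two X25-Vs

def locate(logical_byte: int) -> tuple[int, int]:
    """Map a logical byte offset to (drive index, byte offset on that drive)."""
    stripe = logical_byte // STRIPE_SIZE         # which stripe overall
    within = logical_byte % STRIPE_SIZE          # offset inside the stripe
    drive = stripe % NUM_DRIVES                  # stripes alternate between drives
    drive_offset = (stripe // NUM_DRIVES) * STRIPE_SIZE + within
    return drive, drive_offset

# A large sequential read touches both drives, which is where the
# (near) doubling of sequential bandwidth comes from:
drives_hit = {locate(b)[0] for b in range(0, 512 * 1024, STRIPE_SIZE)}
print(drives_hit)  # {0, 1}
```

Note that a single small read below the stripe size still lands on one drive, which is why random performance at low queue depths gains much less than sequential throughput.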

The Test

AnandTech SSD Testbed

CPU: Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
Motherboard: Intel DX58SO
Chipset: Intel X58 + Marvell SATA 6Gbps PCIe
Chipset Drivers: Intel + Intel IMSM 8.9
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64
Sequential Read/Write Speed


Comments

  • Makaveli - Tuesday, March 30, 2010 - link

    Why are so many of you having difficulty understanding this? YOU DO NOT GET TRIM SUPPORT WITH THE NEW INTEL DRIVERS IF YOU HAVE A RAID ARRAY BUILT OF JUST SSDs!

    Wherever you guys are reading otherwise, stop - it's wrong!

  • vol7ron - Tuesday, March 30, 2010 - link

    Finally, a RAID review! Thank you, thank you, thank you. It comes a few days too late - I already bought the 80GB - but it still makes your review a little more meaningful. It's essentially the equivalent of showing overclocks for CPUs, and more meaningful still, since hard drives are a bottleneck.

    RAID-0 sees a greater impact with 3 or more drives. I think the impact grows faster than the apparent doubling as you add drives to the array. I know TRIM is not supported, but if you could get one more 40GB drive and also include its impact, that would be nice - I would consider anything more than 3 drives in an array purely academic; 3 or fewer drives is a practical (and realistic) system setup.

    Notes to others:
    I saw the $75 discount on Newegg for 80GB X25MG2 (@ $225) and decided to grab it, since one of my 74GB Raptors finally failed in RAID. This discount (or price drop) is most likely due to the $125 40GB version. I also picked up Win7 Ultimate x64, to give it a try.
  • cliffa3 - Tuesday, March 30, 2010 - link


    On an install of Win7, I'm guessing a good bit of random writes occur.

    How much longer would you stave off the performance penalty due to having no TRIM with RAID if you took an image of the drive after installation, secure erased, and restored the image?

    Please correct me if I'm wrong in assuming restoring an image would be entirely sequential.

    I would probably image it anyway, but just trying to get a guess on what you think the impact would be in the above scenario to see if I should immediately secure erase and restore.

    I would also be interested in how much improvement you get by adding another drive to the array - is the scaling in RAID linear?
  • GullLars - Tuesday, March 30, 2010 - link

    Some of the power users I know have used the method of secure erase + image restore to recover performance if/when it degrades. Mostly they do it after heavy benchmarking, or once every few months on their workstations (VMware, databases and the like).

    RAID scales linearly as long as the controller can keep up. But those are raw performance numbers; the real-life impact is not linear and shows diminishing returns, because storage performance divides into two major categories: throughput and access time. Throughput scales linearly; access time stays unchanged. That said, average access time for larger blocks and under heavy load takes less of a hit in RAID.
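The split the comment describes - throughput scales with drive count, access time does not - can be sketched with a toy model. The latency and bandwidth figures below are illustrative assumptions, not measurements from the review.

```python
# Toy model of why RAID-0 shows diminishing real-world returns:
# sequential throughput scales with drive count, but per-request
# access time does not. Numbers are illustrative, not measured.

ACCESS_TIME_MS = 0.1      # assumed per-request latency, unchanged by RAID
THROUGHPUT_MBS = 250.0    # assumed single-drive sequential read speed

def task_time_ms(mb_transferred: float, requests: int, drives: int) -> float:
    """Total time = transfer time (scales with drives) + latency (does not)."""
    transfer = mb_transferred / (THROUGHPUT_MBS * drives) * 1000.0
    latency = requests * ACCESS_TIME_MS
    return transfer + latency

# A 100MB mostly-sequential load speeds up nearly 2x with two drives...
print(task_time_ms(100, 10, 1), task_time_ms(100, 10, 2))  # 401.0 vs 201.0
# ...but a small latency-bound random load barely improves:
print(task_time_ms(1, 250, 1), task_time_ms(1, 250, 2))    # 29.0 vs 27.0
```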

    Intel's southbridges scale to roughly 600-650MB/s, and I've seen 400+ MB/s done at 4KB random.

    As for random read scaling in RAID, you have the formula IOPS = {Queue Depth} / {average access time}.
    Average access time climbs more gently with queue depth the more units you put in the RAID, but at low QD (1-4) there is little to gain for blocks smaller than the stripe size. No matter how many SSDs you add to the RAID, you will never get scaling beyond QD * (IOPS @ QD 1).
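That queue-depth relation is Little's law applied to a drive, and the scaling ceiling follows directly from it. A short sketch, with an assumed (illustrative) 0.1ms access time:

```python
# Little's law for a storage device: IOPS = queue depth / avg access time.
# The 0.1ms access time below is an assumed, illustrative figure.

def iops(queue_depth: int, access_time_s: float) -> float:
    return queue_depth / access_time_s

print(iops(1, 0.0001))  # ~10000 IOPS at QD 1

# The ceiling described above: access time never drops below its QD-1
# value, so IOPS at depth QD can never exceed QD * (IOPS at QD 1).
def iops_ceiling(qd: int, qd1_access_time_s: float) -> float:
    return qd * iops(1, qd1_access_time_s)

print(iops_ceiling(8, 0.0001))  # ~80000 IOPS, the best case at QD 8
```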
  • ThePooBurner - Tuesday, March 30, 2010 - link

    Check out this video of 24 SSDs in a RAID-0 array. Mind blowing.
  • GullLars - Tuesday, March 30, 2010 - link

    Actually, that RAID has bad performance relative to the number of SSDs.
    You are blinded by the sequential read numbers. Those Samsung SSDs have horrible IOPS performance, and the cost of the setup in the video relative to its performance is just outright awful.

    You could get the same read bandwidth with 12 X25-Vs, and at the same time 10x the IOPS performance.
    Or if you went with the C300 instead, eight of them would beat that setup in every test and performance metric you can think of.

    Here is a youtube video of the performance of a Kingston V 40GB launching 50 apps for you to compare to the Samsung setup:

    I will also point out that my two Mtron 7025 SSDs, produced in Dec 2007, can open the entire MS Office 2007 suite in 1 second, running off an SB650 with prefetch/superfetch deactivated.
  • Slash3 - Tuesday, March 30, 2010 - link

    Speaking of which, is there a "report abuse" button for comments on this new site design? I didn't notice one while fumbling around a bit.
  • waynethepain - Tuesday, March 30, 2010 - link

    Would defragging the SSDs mitigate some of the garbage build-up?
  • 7Enigma - Tuesday, March 30, 2010 - link

    You do not defrag an SSD.
  • GullLars - Tuesday, March 30, 2010 - link

    You don't need to defrag an SSD, but you can. It doesn't affect the physical placement of the files, but it will fix their LBA fragmentation in the file tables. Since most SSDs reach full bandwidth (or close to it) at 32-64KB random reads, you'd need a seriously fragmented system before you'd notice anything. Almost no files get fragmented into pieces that small, and even if you had 50 files each in 1000 fragments of 4KB, the SSD would read each one, when needed, in a fraction of a second.

    It doesn't hurt to defrag if you notice a few files in hundreds or thousands of fragments - the lifespan of the SSD will be unaffected by one defrag a week - but it will cause spikes of random writes, which may cause a _temporary_ performance degradation if you don't have TRIM.
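The "fraction of a second" claim above is easy to sanity-check with back-of-envelope arithmetic. The 40MB/s 4KB random-read figure below is an assumption, roughly in line with SSDs of this generation, not a measured number from the review.

```python
# Back-of-envelope check: even a heavily fragmented file reads in a
# fraction of a second on an SSD at 4KB random-read speeds.

RANDOM_4K_MBS = 40.0   # assumed 4KB random-read throughput, MB/s

def read_time_s(fragments: int, fragment_kb: float = 4.0) -> float:
    """Time to read a file split into `fragments` pieces of `fragment_kb`."""
    total_mb = fragments * fragment_kb / 1024.0
    return total_mb / RANDOM_4K_MBS

# One file in 1000 fragments of 4KB (~3.9MB of data):
print(round(read_time_s(1000), 3))  # 0.098 seconds
```

Even 50 such files would take only about five seconds in total, and in practice they'd never all be read at once.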
