The Primer: PCI Express 1.0 vs. 2.0

A serial interface, PCI Express is organized into lanes. Each lane has an independent set of transmit and receive pins, and data can be sent in both directions simultaneously. Here’s where things get misleading: bandwidth in a single direction for a single PCIe 1.0 lane (x1) is 250MB/s, but because you can send and receive 250MB/s at the same time, Intel likes to quote the bandwidth of a PCIe 1.0 x1 slot as 500MB/s. That is indeed the total aggregate bandwidth available to the slot, but you only reach that figure if you’re reading and writing at the same time.


One of our first encounters with PCI Express was at IDF in 2002

PCI Express 2.0 doubles the bidirectional bandwidth per lane. Instead of 250MB/s in each direction per lane, you get 500MB/s.
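The per-lane figures fall out of the raw signaling rates: PCIe 1.0 signals at 2.5GT/s, PCIe 2.0 at 5GT/s, and both spend 2 of every 10 line bits on 8b/10b encoding. A quick sketch of the arithmetic (Python, purely illustrative):

```python
# Per-lane PCIe bandwidth derived from the raw signaling rate.
# PCIe 1.0 runs at 2.5GT/s and PCIe 2.0 at 5GT/s; both use 8b/10b
# encoding, so only 8 of every 10 line bits carry data.

def lane_bandwidth_mb_s(gt_per_s):
    """Usable one-way bandwidth of a single lane, in MB/s."""
    return gt_per_s * 1e9 * 8 / 10 / 8 / 1e6  # line bits -> data bits -> MB

for gen, rate in (("PCIe 1.0", 2.5), ("PCIe 2.0", 5.0)):
    one_way = lane_bandwidth_mb_s(rate)
    print(f"{gen} x1: {one_way:.0f}MB/s per direction, "
          f"{2 * one_way:.0f}MB/s aggregate")
```

Which reproduces the numbers above: 250MB/s and 500MB/s per direction for a 1.0 and 2.0 lane respectively, double that in aggregate.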

Other than graphics, there haven’t been any high-bandwidth consumers on the PCIe bus in desktops, so the distinction between PCIe 1.0 and 2.0 has never really mattered. Today, both USB 3.0 and 6Gbps SATA aim to change that. Either can easily saturate a PCIe 1.0 x1 connection.


Intel's X58 Chipset. The only PCIe 2.0 lanes come from the IOH.

This is a problem because all Intel chipsets have a combination of PCIe 1.0 and 2.0 lanes. Intel’s X58 chipset, for example, has 36 PCIe 2.0 lanes off of the X58 IOH, plus an additional 6 PCIe 1.0 lanes off the ICH. AMD’s 7 and 8 series chipsets, by contrast, don’t have any PCIe 1.0 lanes.


AMD's 890GX doesn't have any PCIe 1.0 lanes

No desktop chipset natively supports both 6Gbps SATA and USB 3.0. AMD’s 8-series brings native 6Gbps SATA support, but USB 3 still requires an external controller. On Intel chipsets, you need a separate controller for both 6Gbps SATA and USB 3.

These third-party controllers are all PCIe devices, just placed on the motherboard. Every motherboard manufacturer uses NEC’s µPD720200 to enable USB 3.0 support. The µPD720200 has a PCIe 2.0 x1 interface and supports two USB 3.0 ports.

The USB 3 spec calls for transfer rates of up to 500MB/s. Connected to a PCIe 2.0 interface, you get 500MB/s up and down, more than enough bandwidth for the controller. However, if you connect the controller to a PCIe 1.0 interface, you only get half that (and even less in practice). It’s not a problem today, but with a fast enough USB 3 device you’d eventually run into a bottleneck.
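To put the halved link in perspective, here’s a back-of-the-envelope sketch in Python. The 10GB file size is an arbitrary example, and both link figures are theoretical ceilings, not real-world throughput:

```python
# What the host-side PCIe link costs a fast USB 3.0 device.
# USB 3.0 itself tops out around 500MB/s, so a PCIe 2.0 x1 uplink
# (500MB/s per direction) matches it, while a PCIe 1.0 x1 uplink
# (250MB/s) halves the ceiling.

FILE_GB = 10  # hypothetical transfer, for illustration only

def copy_seconds(link_mb_s):
    """Best-case seconds to move FILE_GB through the given link."""
    return FILE_GB * 1000 / link_mb_s

print(f"PCIe 2.0 x1: {copy_seconds(500):.0f}s")  # 20s best case
print(f"PCIe 1.0 x1: {copy_seconds(250):.0f}s")  # 40s best case
```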

The 6Gbps situation isn’t any better. Marvell’s 88SE91xx PCIe 2.0 controller is the only way to enable 6Gbps SATA on motherboards (other than 890GX boards) or add-in cards today.

The interface is only a single PCIe 2.0 lane. The 6Gbps SATA spec allows for up to 750MB/s of raw bandwidth (600MB/s of data after 8b/10b encoding), but the PCIe 2.0 x1 interface limits read/write speed to 500MB/s. Pair the controller with a PCIe 1.0 x1 interface and you’re down to 250MB/s (and much less in reality due to bus overhead).
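The chain of caps can be sketched in a few lines of Python (illustrative only; every figure here is a theoretical ceiling that bus overhead lowers in practice):

```python
# Throughput is capped by the slowest hop in the chain.
# 6Gbps SATA carries 750MB/s of raw signal, or 600MB/s of data
# after 8b/10b encoding; the controller's PCIe uplink caps it further.

SATA_6G_DATA_MB_S = 6.0e9 * 8 / 10 / 8 / 1e6  # 600MB/s after encoding

def chain_ceiling(pcie_link_mb_s):
    """Best-case read/write speed a drive can see through the controller."""
    return min(SATA_6G_DATA_MB_S, pcie_link_mb_s)

print(f"PCIe 2.0 x1 uplink: {chain_ceiling(500):.0f}MB/s")  # 500MB/s
print(f"PCIe 1.0 x1 uplink: {chain_ceiling(250):.0f}MB/s")  # 250MB/s
```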

57 Comments

  • Shadowmaster625 - Tuesday, March 30, 2010 - link

    It sounds like AMD made a conscious decision to focus on maximum random write performance, even if it required sacrificing all other key performance metrics. I hope that is the case, because it is pretty sad that their 6 gbps controller is generally outperformed by a 3 gbps controller!
  • astewart999 - Tuesday, March 30, 2010 - link

    When talking performance, why are they not mentioning RAID 0. I suspect SATA3 is not capable?
  • astewart999 - Tuesday, March 30, 2010 - link

    Ignore my ignorance, I read the article then posted. Should have read the posts and ignored the article!
  • nexox - Monday, April 5, 2010 - link

    Just get a SASII (6Gbit) PCI-E HBA (LSI makes one, probably others) - plenty of speed, they generally run on a PCI-E 8x slot, and you can run SATA drives in them just fine. Plus they tend not to cost too much more than the consumer-level SATA adaptors, which are apparently questionable performance-wise. They'd at least make a good baseline for comparison.
  • supremelaw - Saturday, April 17, 2010 - link

    RS2BL040 and RS2BL080 are now at Newegg:

    http://www.newegg.com/Product/Product.aspx?Item=N8...
    http://www.intel.com/products/server/raid-controll...

    http://www.newegg.com/Product/Product.aspx?Item=N8...
    http://www.intel.com/products/server/raid-controll...

    Before buying, confirm whether or not TRIM will work with SSDs in RAID modes.

    http://www.pcper.com/comments.php?nid=8538

    *** UPDATE ***

    The unconfirmed bit has been confirmed as unconfirmed from Intel:

    “Intel® RST 9.6 supports TRIM in AHCI and pass through modes for RAID. A bug has been submitted to change the string that indicates TRIM is supported on RAID volumes (0,1,5,10). Intel is continuing to investigate the ability of providing TRIM support for all RAID volumes in a future release”

    Looks like we'll have to wait a little longer for TRIM through RAID, but there *are* other SSD-specific improvements in this new driver.

    *** END UPDATE ***

    MRFS
  • chrcoluk - Saturday, June 19, 2010 - link

    Ok my thoughts.

    1 - you wrote of PCIe v1 but failed to notice or mention that the PLX chip uses PCIe 1.0 lanes from the P55 chipset, so clearly PCIe 1.0 can supply the bandwidth if utilised properly; the PLX chip turns 4 1.0 lanes into 2 virtual 2.0 lanes for the SATA 6G and USB 3.
    2 - some P55 boards, mine noticeably, have a PCIe 2.0 slot fed off the P55 chipset @ x4 speed. Seems reviewers have got something wrong, or are they claiming Asus have it wrong? Even if we assume it's actually PCIe 1.0 x4, that is still enough bandwidth to feed a SATA 6G controller. Indeed the onboard PLX which you praised sacrifices this x4 PCIe slot and uses those 4 lanes to feed itself. My theory is the U3S6 card Asus sell will perform the same as the onboard PLX in a x4 slot but no reviewer has tested this properly.
    3 - what's the reason you did not test both Gigabyte's onboard and the lower Asus onboard which borrow bandwidth from the primary PCIe x16 lanes? I am looking for tests of those in both turbo/levelup and normal mode.
  • gimespace - Tuesday, August 8, 2017 - link

    Try enabling DirectGMA with maximum GPU Aperture in AMD Catalyst Control Center. It not only makes the graphics card faster but allowed me to get up to the maximum 560MB/s read speed for my SSD!
