The Primer: PCI Express 1.0 vs. 2.0

PCI Express is a serial interface organized into lanes. Each lane has an independent set of transmit and receive pins, so data can move in both directions simultaneously. And here’s where things get misleading: bandwidth in a single direction for a single PCIe 1.0 lane (x1) is 250MB/s, but because you can send and receive 250MB/s at the same time, Intel likes to quote the bandwidth available to a PCIe 1.0 x1 slot as 500MB/s. That is the total aggregate bandwidth available to the slot, but you only reach that figure if you’re reading and writing at the same time.

One of our first encounters with PCI Express was at IDF in 2002

PCI Express 2.0 doubles the bandwidth per lane in each direction: instead of 250MB/s each way per lane, you get 500MB/s.
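Put in code form, the arithmetic looks like this (a minimal sketch in Python; the figures are the ones quoted above):

```python
# Per-lane, per-direction PCIe bandwidth in MB/s by generation.
PER_LANE_MBPS = {"1.0": 250, "2.0": 500}

def lane_bandwidth(gen):
    """Return (per-direction, aggregate) MB/s for a single lane."""
    one_way = PER_LANE_MBPS[gen]
    # The aggregate figure counts both directions at once; you only
    # see it if you're reading and writing simultaneously.
    return one_way, one_way * 2

for gen in ("1.0", "2.0"):
    one_way, aggregate = lane_bandwidth(gen)
    print(f"PCIe {gen} x1: {one_way}MB/s per direction, {aggregate}MB/s aggregate")
```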

Other than graphics, there haven’t been any high-bandwidth consumers on the PCIe bus in desktops, so the distinction between PCIe 1.0 and 2.0 has never really mattered. Today, USB 3.0 and 6Gbps SATA aim to change that: both can easily saturate a PCIe 1.0 x1 connection.

Intel's X58 Chipset. The only PCIe 2.0 lanes come from the IOH.

This is a problem because all Intel chipsets have a mix of PCIe 1.0 and 2.0 lanes. Intel’s X58 chipset, for example, has 36 PCIe 2.0 lanes off the X58 IOH, plus an additional 6 PCIe 1.0 lanes off the ICH. AMD’s 7- and 8-series chipsets don’t have any PCIe 1.0 lanes.

AMD's 890GX doesn't have any PCIe 1.0 lanes

No desktop chipset natively supports both 6Gbps SATA and USB 3.0. AMD’s 8-series brings native 6Gbps SATA support, but USB 3 still requires an external controller. On Intel chipsets, you need a separate controller for both 6Gbps SATA and USB 3.

These third-party controllers are all PCIe devices, simply placed on the motherboard. NEC’s µPD720200 is the controller every motherboard manufacturer uses to enable USB 3.0 support; it has a PCIe 2.0 x1 interface and supports two USB 3.0 ports.

The USB 3 spec calls for transfer rates of up to 500MB/s. Connected to a PCIe 2.0 interface, the controller gets 500MB/s up and down, more than enough bandwidth. Connect it to a PCIe 1.0 interface, however, and you only get half that (and even less in practice). That’s not a problem today, but with a fast enough USB 3 device you’d eventually run into a bottleneck.
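Here’s that ceiling as a minimal sketch, using the raw spec figures above (real devices fall short of both numbers):

```python
# Does the NEC controller's PCIe uplink bottleneck USB 3.0?
# Raw spec figures; real-world throughput is lower on both buses.
USB3_MBPS = 500                            # USB 3.0 peak transfer rate
PCIE_X1_MBPS = {"1.0": 250, "2.0": 500}    # per direction, per lane

for gen, link in PCIE_X1_MBPS.items():
    ceiling = min(USB3_MBPS, link)
    limited = " (link-limited)" if link < USB3_MBPS else ""
    print(f"USB 3.0 controller on PCIe {gen} x1: ~{ceiling}MB/s ceiling{limited}")
```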

The 6Gbps situation isn’t any better. Marvell’s 88SE91xx PCIe 2.0 controller is the only way to enable 6Gbps SATA on motherboards (other than 890GX boards) or add-in cards today.

The interface is only a single PCIe 2.0 lane. The 6Gbps SATA spec allows for up to 750MB/s of raw bandwidth, but the PCIe 2.0 x1 interface limits read/write speed to 500MB/s. Pair the controller with a PCIe 1.0 x1 interface and you’re down to 250MB/s (and much less in reality due to bus overhead).
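And the same check for the Marvell controller (a minimal sketch; the ~600MB/s usable figure assumes SATA’s standard 8b/10b encoding, which isn’t spelled out above):

```python
# 6Gbps SATA vs. the Marvell controller's x1 uplink, one direction at a time.
SATA_RAW_MBPS = 6000 / 8                 # 750MB/s raw line rate
SATA_USABLE_MBPS = SATA_RAW_MBPS * 0.8   # ~600MB/s after 8b/10b encoding
PCIE_X1_MBPS = {"1.0": 250, "2.0": 500}

for gen, link in PCIE_X1_MBPS.items():
    # Throughput can never exceed the narrower of the two pipes, and
    # bus overhead shaves off more in practice.
    ceiling = min(SATA_USABLE_MBPS, link)
    print(f"Marvell 88SE91xx on PCIe {gen} x1: ~{ceiling:.0f}MB/s best case")
```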



Comments

  • iwodo - Thursday, March 25, 2010 - link

    HDD performance has never really been the centre of discussion, since hard drives are always slow anyway. But with SSDs, it has finally been shown that the SATA controller makes a lot of difference.

    So what can we expect from future SATA controllers? Is there any more performance we can squeeze out?
  • KaarlisK - Thursday, March 25, 2010 - link

    Do the P55 boards allow plugging in a graphics card in one x16 slot, and an IO card in the other x16 slot?
    According to Intel chipset specs, only the server versions of the chipset should allow that.
  • CharonPDX - Thursday, March 25, 2010 - link

    You talk about combining four PCIe 1.0 lanes to get "PCIe 2.0-like performance".

    PCIe doesn't care what generation it is; it only cares about how much bandwidth is available.

    Four PCIe 1.0 lanes will provide DOUBLE the bandwidth of one PCIe 2.0 lane. (4x250=1000 each way, 1x500=500 each way.)

    The fact that ICH10 and the P/H55 PCHs have 6-8 PCIe 1.0 lanes means they easily dwarf the measly 2 PCIe 2.0 lanes the AMD chipset has. (6x250=1500 or 8x250=2000 are both greater than 2x500=1000; the sketch after this comment runs the numbers.) Irregardless, all three chipsets only have 2 GB/s between those PCIe ports and the memory controller.

    Why Highpoint cheaped out and put a two-port SATA 6Gb/s controller on a one-lane PCIe card is beyond me. Even at PCIe 2.0, that's still woefully inadequate. That REALLY should be on a four-lane card. Nobody but an enthusiast is going to buy it right now, and more and more "mainstream" boards are coming with 4-lane PCIe slots.

    By the way, the 4-lane slot on the DX58SO is PCIe 2.0, per Intel's specs.

    The fact that you have dismal results in a "1.0 slot" has nothing to do with it being 1.0, and everything to do with available bandwidth. If you put the exact same chip on a PCIe 1.0 4-lane card, you would see performance identical to your one-lane card in a PCIe 2.0 slot (possibly better, if the drive can push more than 500 MB/s). (I would have liked to see performance numbers running that card in the AMD board's PCIe 2.0 one-lane slot.)
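CharonPDX's argument boils down to bandwidth = lanes × per-lane rate; a minimal sketch running the comment's own numbers:

```python
# Link bandwidth is lanes x per-lane rate; the generation label by
# itself says nothing about total bandwidth.
PER_LANE_MBPS = {"1.0": 250, "2.0": 500}   # per direction

def link_mbps(gen, lanes):
    return PER_LANE_MBPS[gen] * lanes

for gen, lanes in [("1.0", 4), ("2.0", 1), ("1.0", 6), ("1.0", 8), ("2.0", 2)]:
    print(f"PCIe {gen} x{lanes}: {link_mbps(gen, lanes)}MB/s each way")
```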
  • Anand Lal Shimpi - Thursday, March 25, 2010 - link

    The problem is that all of the on-board controllers and the cheaper add-in cards are all PCIe 2.0 x1 cards.

    Intel's latest DX58SO BIOS lists what mode the PCIe slots are operating in and when I install the HighPoint card in the x4 it lists its operating mode as 2.5GT/s and not 5.0GT/s. The x16 slots are correctly listed as 5.0GT/s.

    Take care,
  • qwertymac93 - Thursday, March 25, 2010 - link

    While the SB850 has 2 PCIe 2.0 lanes, the 890GX northbridge has 16 for graphics cards and another 6 lanes for anything else (that's 24 in total, btw). The southbridge is connected to the northbridge with something similar to 4 PCIe 2.0 lanes, thus 2GB/s (16 gigaBITS/s). I have no idea why you think the "measly" two lanes coming off the southbridge say anything about its SATA performance, nor do I understand why you think the 6 lanes coming off of Intel's H55 (being fed by a slow DMI link) are somehow better.

    P.S. I don't think "irregardless" is a word; it's sorta like a self-contained double negative. "Ir-" = not or without, "regard" = care or worth, "-less" = not or without. "Irregardless" = not without care or worth.
  • CharonPDX - Thursday, March 25, 2010 - link

    Both the SB850 and the Intel chipsets have 2 GB/s links between the NB and SB (or CPU and SB, in the P/H55.)

    And you are correct, I was not referring at all to the SB850's onboard SATA controller; solely to its PCIe slots. Six lanes of PCIe 1.0 have more available bandwidth than two lanes of PCIe 2.0. This comes into play when using an add-in card.

    (Yes, I know "irregardless" isn't a real word, it's just fun to use.)
  • CharonPDX - Thursday, March 25, 2010 - link

    P.S. Go get a Highpoint RocketRAID 640. It has the exact same SATA 6Gb chip as the card you used, but on an x4 connector (and with four SATA ports instead of two, and with RAID; if you're only running one drive, though, it should be identical). Run it in the PCIe 1.0 x4 slot on the P55 board. Compare that to the x4 slot on the 890GX board. I bet you'll see *ZERO* difference when running just one drive.

    In fact, I bet on the 890GX board, you'll see the exact same performance on the RR640 in the x4 slot as on the Rocket 600 in the x1 slot.
  • oggy - Thursday, March 25, 2010 - link

    It would be fun to see some dual C300 action :)
  • wiak - Friday, March 26, 2010 - link

    Yes, on both the AMD SB850's native 6Gbps ports and the Marvell 6Gbps controller on AMD and Intel ;)
  • Ramon Zarat - Thursday, March 25, 2010 - link

    Unfortunately, testing with one drive gives us only 1/3 of the picture.

    To REALLY saturate the SATA3/PCIe bus, 2 drives in striped RAID 0 should have been used.

    To REALLY saturate everything (SATA3/USB3/PCIe) AT THE SAME TIME, an external SATA3-to-USB3 SSD cradle transferring to/from 2 SATA3 SSDs in striped RAID 0 should have been used.

    The only thing needed to get a complete and definitive picture and settle this question once and for all would have been 2 more SATA3 SSDs and a cradle... (the sketch after this comment runs the rough numbers).

    Excellent review, but incomplete in my view.
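A rough sanity check of that proposal (minimal sketch; the ~350MB/s per-drive sequential read rate is an assumed C300-class figure, not a measured one):

```python
# Would two striped 6Gbps SSDs saturate the PCIe links in question?
# ~350MB/s per drive is an assumed C300-class sequential read rate.
DRIVE_MBPS = 350
RAID0_MBPS = DRIVE_MBPS * 2        # ideal two-drive striping, no overhead

for name, link in [("PCIe 1.0 x1", 250), ("PCIe 2.0 x1", 500)]:
    verdict = "saturated" if RAID0_MBPS > link else "headroom left"
    print(f"RAID 0 pair (~{RAID0_MBPS}MB/s) vs {name} ({link}MB/s): {verdict}")
```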
