Kaveri and Counting Cores

With the move towards highly integrated SoCs we've seen a variety of approaches to core counts. Apple, Intel and Qualcomm still count CPU cores when advertising an SoC. For Apple and Qualcomm that's partially because neither company is particularly fond of disclosing the configuration of their GPUs. More recently, NVIDIA took the somewhat insane stance of counting GPU CUDA cores on its Tegra K1 SoC. Motorola on the other hand opted for the bizarre choice of aggregating CPU, GPU and off-die companion processors with the X8 platform in its Moto X smartphone. Eventually we will have to find a way to characterize these highly integrated SoCs, particularly when the majority of applications actually depend on/leverage both CPU and GPU cores.

AMD finds itself in a unique position with Kaveri: it has a truly unified CPU/GPU architecture and needs to establish a new nomenclature for use going forward. With 47% of the Kaveri die dedicated to GPU use, and an architecture that treats both CPU and GPU as equals, I can understand AMD's desire to talk about the total number of cores on the APU.

AMD settled on the term "Compute Core", which can refer to either an x86 (or maybe eventually ARM) CPU core or a GCN compute unit. The breakdown is as follows:

  • Each thread on a CPU is a Compute Core
  • Each Compute Unit on the IGP is a Compute Core
  • Total Compute Cores = CPU Compute Cores + IGP Compute Cores

This means that the high-end SKU, the A10-7850K, will have a total of 12 Compute Cores: four from the CPU (two Steamroller modules supporting four threads) and eight from the IGP (eight compute units in the R7 graphics).
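To make the counting explicit, here is a trivial sketch of the arithmetic (the struct and field names are ours purely for illustration, not anything AMD defines):

    /* Compute Core arithmetic: total = CPU threads + GCN compute units.
       Illustrative only; not an AMD API. */
    #include <stdio.h>

    struct apu {
        const char *name;
        int cpu_threads;  /* two Steamroller modules expose four threads */
        int gcn_cus;      /* GCN compute units in the IGP */
    };

    int main(void)
    {
        struct apu a10_7850k = { "A10-7850K", 4, 8 };
        printf("%s: %d Compute Cores (%d CPU + %d GPU)\n",
               a10_7850k.name,
               a10_7850k.cpu_threads + a10_7850k.gcn_cus,
               a10_7850k.cpu_threads, a10_7850k.gcn_cus);
        return 0;
    }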

There are some qualifications to be made on this front. Technically, AMD is correct – each compute unit in the IGP and each thread on the CPU can run separate code. The Hawaii GCN architecture can spawn as many kernels as there are compute units, whereas a couple of generations ago we were restricted to a single compute kernel on the GPU at once (with blocks of work merely split across the CUs). However, these 12 compute cores are clearly not equivalent: a programmer still has to write code for the CPU and the GPU specifically in order to use all of the processing power available.

Whenever AMD (or its partners) promote the new APUs, AMD tells us that two sets of numbers should be quoted in reference to the Compute Cores – the total, and the CPU/GPU breakdown on the APU. Thus the A10-7850K would be marketed as a “12 Compute Core” device, with “(4 CPU + 8 GPU)” following immediately after. I applaud AMD's decision not to obfuscate the internal configuration of its APUs. This approach seems the most sensible if it wants to tout the total processing power of the APU while still telling users who understand a bit more what the actual configuration of the SoC is. The biggest issue is how to address the users who automatically assume that more cores == better. The root of this problem is very similar to the old PR-rating debates of the Athlon XP days: explaining the intricacies of CPU/GPU programming to end users is really no different from explaining why IPC * frequency matters more than absolute frequency.

When a programmer obtains an APU, the OpenCL runtime will report the eight compute units of the GPU as targets for compute offload; at the same time it is up to the programmer to use the CPU threads as effectively as possible, even with AMD's third-generation Bulldozer modules implementing a dual-INT + single-FP arrangement.
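A minimal sketch of that query through the standard OpenCL host API is shown below (error handling is omitted, and the assumption that the first platform and first GPU device correspond to the Kaveri IGP is ours):

    /* Query the IGP's compute unit count via the OpenCL host API (sketch). */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id gpu;
        cl_uint cus;
        char name[256];

        clGetPlatformIDs(1, &platform, NULL);  /* first platform (assumed to be AMD's) */
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &gpu, NULL);
        clGetDeviceInfo(gpu, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(cus), &cus, NULL);
        clGetDeviceInfo(gpu, CL_DEVICE_NAME, sizeof(name), name, NULL);

        printf("%s: %u compute units\n", name, cus);
        return 0;
    }

On the A10-7850K the GPU device should come back with eight compute units, matching the "8 GPU" half of AMD's marketing figure.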

At launch, AMD will offer the following configurations:

  • A10-7850K: 12 Compute Cores (4 CPU + 8 GPU)
  • A10-7700K: 10 Compute Cores (4 CPU + 6 GPU)
  • A8-7600: 10 Compute Cores (4 CPU + 6 GPU)

The problem with distilling the processing power of the APU into a number of Compute Cores lies primarily on the CPU side. AMD is keeping the GPU at approximately the same clock (720 MHz for all three), but the CPU frequency will differ greatly, especially on the A8-7600, which has a configurable TDP and in its 45W mode will run 300-400 MHz lower.

Steamroller Architecture Redux

We've already documented much of what Steamroller brings to the table based on AMD's Hot Chips presentation last year, but I'll offer a quick recap here.

Steamroller brings forward AMD's Bulldozer architecture, largely unchanged. We're still talking about a dual-core module featuring two independent integer execution cores with a single shared floating point execution core capable of executing instructions from two threads in parallel. A single module still appears as two cores/threads to the OS.

In Bulldozer and Piledriver, each integer core had its own independent scheduler, but the two cores shared a single fetch and decode unit. Instructions would come in and decoded operations would be fed to each integer pipe on alternating clock cycles. In Steamroller the decode hardware is duplicated in each module, so now each integer core gets its own decode unit; the single shared FP unit is fed by both decoders.

L1 instruction caches grow from 64KB to 96KB per module, which AMD claims reduces misses by up to 30%. There's also an updated branch predictor, responsible for reducing mispredicted branches by up to 20%. Both the integer and FP register files grow in size, as does the scheduling window; the combination increases the number of dispatches per thread by up to 25%.

There are huge improvements on the store side. Steamroller can now issue up to 2 stores at the same time compared to 1 in Bulldozer/Piledriver. The load/store queue sizes go up by around 20% as well. It's remarkable just how much low hanging fruit there was in the Bulldozer design.

GCN in an APU

The integrated graphics solution on Trinity/Richland launched with a Cayman-derived VLIW4 architecture, which unfortunately arrived shortly after the desktop side of the equation had finished moving from VLIW5/VLIW4 to GCN. Having a product stack with largely different GPU architectures doesn't help anyone, particularly on the developer front. Moving to GCN was therefore the appropriate thing to do: Kaveri now uses the same architecture found in AMD's high-end R9 290X GPU, based on the Hawaii design.

This enabled AMD to add in all the features it currently has on Hawaii with little to no effort – the TrueAudio DSP, the upgraded Video Coding Engine and the Unified Video Decoder are prime examples. Whether or not AMD decides to develop an APU with more than 8 GCN CUs is another matter. This is a point we've brought up with AMD internally, and one that I'm curious about: do any AnandTech readers have an interest in an even higher-end APU with substantially more graphics horsepower? Memory bandwidth obviously becomes an issue, but the real question is how valuable an Xbox One/PS4-like APU would be to the community.
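To put rough numbers on the bandwidth point, here is a back-of-the-envelope comparison (the helper and the figures are our own illustration, not anything AMD has quoted):

    /* Rough peak memory bandwidth: rate (MT/s) x bus width (bytes) x channels. */
    #include <stdio.h>

    static double peak_gb_per_s(double mts, int bus_bytes, int channels)
    {
        return mts * bus_bytes * channels / 1000.0;
    }

    int main(void)
    {
        /* Kaveri: dual-channel DDR3-2133, 64-bit (8-byte) channels */
        printf("Kaveri, DDR3-2133 x2: %.1f GB/s\n", peak_gb_per_s(2133, 8, 2));
        /* PS4: 256-bit (32-byte) GDDR5 at an effective 5.5 Gbps */
        printf("PS4, GDDR5 256-bit:   %.1f GB/s\n", peak_gb_per_s(5500, 32, 1));
        return 0;
    }

Dual-channel DDR3-2133 tops out at roughly 34 GB/s versus roughly 176 GB/s for the PS4's GDDR5, which is why a much larger GCN array hanging off Kaveri's memory interface would likely sit starved for data without a wider bus or on-package memory.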

Comments

  • LarsBars - Tuesday, January 14, 2014 - link

    Thanks Ian / Rahul for the article, and thanks especially for having a page on the FX / server situation. I like to follow AMD news, and I trust AnandTech to be a reputable source that won't get emotional.

    I would love to know if you guys have any ETA on any companies manufacturing 16GB DDR3 unbuffered non-ECC RAM sticks, though.
  • SilthDraeth - Wednesday, January 15, 2014 - link

    Going to go with a few other people here, Ian and Rahul: you guys point out that AMD wants to be able to play said games at 1080p at 30 frames per second.

    And yet, you didn't find a setting in your benchmark games that ran at 30 fps at 1080p and then duplicate those settings on the other systems. I understand this will take a bit more work, but I would like to see it running Sleeping Dogs at 1080p, what settings were needed to hit 30fps, and then see what fps the rest of the systems hit at the same settings.

    Can you please update this review with that information?
  • yottabit - Wednesday, January 15, 2014 - link

    I'm very disappointed to see meaningful conclusions attempted to be drawn from benchmarks of 2-6 FPS in an AnandTech article. Saying things like "The Iris Pro really suffers in Sleeping Dogs at 1080p" is ridiculous when all the FPS are < 7. More useful info would have been about why the Iris Pro gets hit harder... I'm assuming because the eDRAM is less and less effective at higher res and settings, and Intel has yet to solve the memory bandwidth issue. Obviously the Iris Pro has the raw GPU horsepower because it's able to keep up fine at the lower resolutions.

    I'm more impressed at how far Intel has come than AMD (who has historically enjoyed a large lead) in terms of iGPU tech. Thinking back to things like the GMA graphics and I'm very happy to see Intel where they are today.
  • yottabit - Wednesday, January 15, 2014 - link

    It's also pretty bad to include a very easy-to-miss, ambiguous line saying "Unfortunately due to various circumstances we do not have Iris Pro data for F1 2013" and then reference the same charts saying "none of the Intel integrated graphics solutions can keep up with AMD"
  • duploxxx - Wednesday, January 15, 2014 - link

    Obviously the Iris Pro has the raw GPU horsepower because it's able to keep up fine at the lower resolutions.

    you've just proven that you have no idea, since it's the other way around...
  • yottabit - Wednesday, January 15, 2014 - link

    I doubt they were CPU bound in those instances, which is what you seem to be implying

    There is a difference between being GPU bound and being GPU bound at certain settings and resolutions. I would assume the Iris Pro is going to suffer heavier from increases in resolution and detail because of its 128 MB eDRAM. If we could have seen increased quality testing at lower resolutions this would help affirm this. For instance shader intensive testing at a lower resolution...
  • yottabit - Wednesday, January 15, 2014 - link

    Actually, we know they weren't CPU bound at lower resolutions, because the 6750 discrete card showed consistently higher results than the Iris Pro and AMD. If it were CPU bound you would think you'd see the same results with the 6750.

    What I was trying to say is that the Iris Pro is suffering disproportionately from some sort of scaling, and the article does little to examine what that is and what the advantage of the AMD is. Does the AMD have more shader power, and that's why it's able to scale better at high quality settings? Or does it have better memory bandwidth management, and that's why it's able to scale better at high resolutions? It's obviously scaling better somehow, because the Iris Pro beats it in many benchmarks at low res but loses out at high res. Because the quality and resolution are coupled, it's hard to learn what's going on. It might be a good system to use for AnandTech Bench, but I would like to see testing data that is specific to the scope of the articles...
  • ericore - Wednesday, January 15, 2014 - link

    This is the least impressive review I have ever seen on AnandTech; it's not horrible, but it's nowhere near the usual fantastic mark I would give. I did like the bit on overclocking, but found the whole benchmark section completely designed without thought (or half-assed); quite frankly you could remove the whole thing from the article. And where is the overclocked Kaveri in the benchmarks? First time I've had to use other review sites.

    At $200 CAD, Kaveri will need a price cut if AMD expects this thing to sell well. No way that's worth $200 – $160 tops.
  • MrHorizontal - Wednesday, January 15, 2014 - link

    HSA, hUMA and Mantle are all very interesting, but as has been pointed out many times, it's the APIs that make or break a platform. On this note, there isn't such a thing as a 'heterogeneous' API. I can see situations where Mantle would help with math acceleration in HPC contexts, and also see where hUMA makes a lot of sense in desktop contexts. The HSA Foundation has it spot on to standardise the instructions across all of these distinct technologies. In effect this would make HSAIL the 'holy grail' ISA, with X86 in effect playing second fiddle to it. So, yes, the real spur point is as mentioned – making the compilers, JITs and VMs aware of the stack and having them use it when/if available. The issue is that there are only so many bytecodes a single program can support, so having the hardware speak the same language as an intermediary language like HSAIL means the bridge between hardware and software is made significantly easier. The proof as always is in the pudding, and it all depends on whether the design choices provided by HSAIL are good enough or not.

    You asked in the review whether it would be good to have an SoC with a much bigger GPU in it. The answer is yes and no. SoCs make a lot of sense to phone makers and heavily integrated high-volume players, in particular Apple. In fact, I'd be very surprised if Apple isn't talking to AMD about Kaveri and APUs generally, because it's products like the iMac and Mac mini that stand to benefit most from an APU – small computers driving enormous screens (consider that a Mac mini is a firm favourite HTPC when driving a TV).

    However, while there are isolated use cases such as consoles, iMacs, Mac minis and the like for an SoC like Kaveri, what I'd like to see is some more effort on beefing up the buses and interconnects between chips. The first and most obvious low-hanging fruit to target here is the memory bus, because SDRAM and its DDR variants are getting a little long in the tooth. RAM is fast becoming a contention point *slowing down* applications, particularly in high-throughput distributed contexts.

    With AMD specifically, though, I'd like to see a (proprietary, if necessary) bus to allow all of the HSA and hUMA magic to happen between a discrete CPU and GPU. In other words, I as an ISV or OEM could build a machine with a Jaguar CPU and an R9 card and get the benefits of a system heavily skewed towards GPU usage (such a setup would be good for video walls running 6-24 screens in a Single Large Surface Eyefinity configuration). Alternatively, the bus, which by necessity would be quite wide, should also be beefed up to access significantly more than 32GB of RAM. As a programmer, RAM is a massively limiting factor and I really could do with a high-end laptop with 64-96GB of RAM in it – why doesn't this exist? So buses. You saw how important HyperTransport was back in the day. Now we need a new one, a fully HSA-compliant HyperTransport.

    The bus between interconnected components in a machine is also only half the problem. The next problem after that is a bus capable of leashing together multiple machines, all working together as a heterogeneous cluster.

    So yeah. SoCs are good, but there are simply too many use cases in business and industry where there isn't enough justification to fabricate a custom SoC for a given task. Rather, it'd be far more beneficial to provide all of these technologies in a modularised format, and ironically start transforming the PC into something more like a Transputer (Transputers were basically machines with a super-wide bus into which you just plugged modules: if you wanted more CPUs, plug in a CPU module; if you wanted graphics, storage, etc., plug those in).

    So I think AMD are definitely on the right track - but even they say it's only the first part of the puzzle to move to a post-X86 ISA:
    - We need fully HSA-capable buses (first a HyperTransport-esque solution between discrete GPU and CPU, then a NUMA-esque solution to leash together clusters of machines)
    - We need it to be an open spec backed not just by AMD, but by Qualcomm, ARM and Intel (though they'll need to be strong-armed into it and into loosening their grip on the X86 golden goose, but I think even they realise this with their efforts in Iris and Knights Landing)
    - We need hardware to comply to industry standard bytecode, to meet the software people in the middle who all have to code to a specific specification

    And with that, we'd truly have an end to the PC and X86 as a dominant architecture, with the ISA actually targeting the bus and the combined set of capabilities of all hardware modules rather than that of a specific CPU.

    I'd also like to see an investigation, or at least this question raised to AMD's engineers: why does Steamroller even need an FP unit at all? Can't the GPU effectively handle all FP work on the CPU side? Wouldn't it be cheaper/faster/better to put in a fixed-function emulation bridge to translate all X87 calls to GCN?
  • mikato - Wednesday, January 15, 2014 - link

    For your last paragraph, I'm pretty sure something like that has been the idea since the beginning of Bulldozer/Piledriver/Steamroller/Excavator. It has an FP unit because they haven't gotten a way to move all that work to the GPU yet.
