For the last few years, Silicon Valley’s new motto seems to have been “Data is the new oil,” and for good reason. The number of companies employing machine learning-based AI technologies has exploded, and even a few years after it all kicked off in earnest, those numbers continue to grow. This form of AI is no longer just an academic thesis or a curious research project; machine learning has become an important part of the enterprise market, and its impact on enterprise hardware – both purchasing and development – would be difficult to overstate. This is the era of AI.

At first sight, the hardware choices for these kinds of applications seem simple: Intel Xeon CPUs for storing and preprocessing data, NVIDIA GPUs for (almost) everything AI. And indeed, this has largely been the case for the last few years now. However, NVIDIA’s competitors have not been standing idly by the entire time – and that especially goes for Intel, whose enterprise market share all of this ultimately threatens. With everything from dedicated low-power inference processors to purpose-optimized Xeons, Intel is taking aim at every level of the AI market. The net result is that between all of these competitors, we’re seeing AI tackled from many different directions, and the hardware battle for the AI era is, in our humble opinion, insanely interesting.

Today we’re taking a look at what’s perhaps the heart of Intel’s hardware in the AI space, Intel’s second-generation Xeon Scalable processors, better known as "Cascade Lake". Introduced a bit earlier this year, these new processors are still based on the same core Skylake architecture as the first-generation products, but incorporate a number of new instructions to speed up AI performance.
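To give a rough sense of what those new instructions (the AVX-512 VNNI extensions, marketed as DL Boost) actually accelerate, here is a minimal NumPy sketch of the INT8 inference arithmetic involved: quantize to 8-bit integers, multiply-accumulate into a wide accumulator, and rescale to floating point once at the end. The array sizes, seed, and quantization scheme are purely illustrative and not taken from Intel documentation.

```python
import numpy as np

# Illustrative sketch of INT8 inference arithmetic. The inner
# int8 x int8 multiply-accumulate is the pattern VNNI fuses into a single
# instruction (the real vpdpbusd expects unsigned activations and signed
# weights; this sketch glosses over that detail). All values are made up.

rng = np.random.default_rng(0)
activations = rng.standard_normal(1024).astype(np.float32)
weights = rng.standard_normal(1024).astype(np.float32)

def quantize_int8(x):
    """Symmetric linear quantization to int8; returns (values, scale)."""
    scale = np.abs(x).max() / 127.0
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8), scale

a_q, a_scale = quantize_int8(activations)
w_q, w_scale = quantize_int8(weights)

# Multiply the 8-bit values and accumulate in a wider integer type
# (the hardware uses 32-bit accumulators), then rescale to float once.
acc = np.dot(a_q.astype(np.int32), w_q.astype(np.int32))
approx = acc * (a_scale * w_scale)

exact = float(np.dot(activations, weights))
print(f"FP32 dot product:   {exact:.4f}")
print(f"INT8 approximation: {approx:.4f}")
```

The appeal is easy to see: INT8 values need a quarter of the memory bandwidth of FP32, and with VNNI the multiply-accumulate that previously took a short sequence of AVX-512 instructions collapses into one.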

And as far as new technology goes, this is certainly the most interesting aspect of Cascade Lake. While we could talk about the three to six percent general CPU performance improvement, the 56 cores of Intel’s most expensive processor ever, and the "world record benchmarks," these small improvements are close to irrelevant for the near and mid-term future of the IT world. Just look at the very first slide of the Intel press & analyst briefing. 

Internet of things, data engineering, and AI. That is where a large part of the growth, the innovation, and the future of IT will be. And this is where Intel wants to be.

Right now, NVIDIA has a virtual monopoly on the “sexiest” part of this market, which is deep learning and “massively parallel HPC” software. Thanks to a confluence of factors on the hardware and software sides, most of this software is run on NVIDIA GPUs and clusters. So to the general public, it looks like NVIDIA owns the “AI market”, a picture that is not inaccurate, but also not complete. There’s a lot more to the AI market than just neural network inferencing, and in particular, everything that has to happen to feed the AI model with data gets very little attention. As a result, it’s neural networks and Terminator robots that get all the headlines, even though they’re just part of the picture. In reality, the processing web for AI applications is much more like the picture below.

In short, actual machine learning code execution is only a very small part of the software tooling necessary to build an AI application.

Before you can even start, you have to ingest the data, then decompress, filter, reorder, map, and shuffle it around. Once everything is sorted and shuffled, you have to aggregate the data. And since ML algorithms need large amounts of data to produce good predictions, this stage can be very processing- and memory-intensive. Why? Let us delve a little deeper.
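To make that pre-processing stage a little more concrete, here is a minimal, hypothetical sketch in Python/pandas of the kind of CPU-bound plumbing that runs before a single model weight is touched. The file name, column names, and thresholds are invented for illustration; the point is simply that ingest, filter, map, shuffle, and aggregate are all ordinary CPU (and memory) work.

```python
import numpy as np
import pandas as pd

# Hypothetical example of the CPU-side work that precedes any training.
# The file name, columns, and thresholds below are invented for illustration.

# 1. Ingest + decompress: pandas transparently decompresses the gzip archive.
df = pd.read_csv("clickstream_2019-07.csv.gz", compression="gzip")

# 2. Filter: drop malformed rows and obvious outliers.
df = df.dropna(subset=["user_id", "session_length_s"])
df = df[df["session_length_s"].between(1, 3600)]

# 3. Map: derive features from the raw columns.
timestamps = pd.to_datetime(df["timestamp"])
df["hour_of_day"] = timestamps.dt.hour
df["is_weekend"] = timestamps.dt.dayofweek >= 5

# 4. Shuffle: randomize row order so training batches aren't biased by the
#    original sort order of the log files.
df = df.sample(frac=1.0, random_state=42).reset_index(drop=True)

# 5. Aggregate: roll events up to one training example per user -- this
#    group-by is where memory use balloons on large datasets.
features = df.groupby("user_id").agg(
    sessions=("session_length_s", "count"),
    avg_session=("session_length_s", "mean"),
    weekend_share=("is_weekend", "mean"),
)

# Only now is the data ready to hand to an ML framework.
X = features.to_numpy(dtype=np.float32)
```

None of this touches a GPU, and on real datasets each of these steps has to be repeated every time the data or the features change – which is exactly why the data-wrangling side of the pipeline deserves more attention than it usually gets.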

More Than Deep Learning
56 Comments

  • Gondalf - Tuesday, July 30, 2019 - link

    Kudos to the article from a technical point of view :), a little less for the weak analysis of the server market. Johan says that Intel is slowing down in servers but the server market is growing fast.
    Unfortunately it is not: Q1 this year was the worst quarter for the server market in 8 quarters, with growth of only 1%. Q2 will likely be on a negative trend too, and there is a general consensus that 2019 will be a negative year, with a drop in global revenue.
    So the recent Intel drop is consistent with a drop in demand in China in Q2.

    It should be underlined that a GPU has to be driven by a host: for every GPU like a Tesla that is up and running, there are one or two Xeons on the motherboard.
    A GPU is only an accelerator, and without a CPU it is useless. Intel's slides about upcoming threats from competitors refer to the presence of AMD in HPC, IBM, and some sparse ARM-based SKUs for custom applications.
    A GPU is welcome; it helps to sell more Xeons.
  • eastcoast_pete - Tuesday, July 30, 2019 - link

    More a question than anything else: What is the state of AI-related computing on AMD (graphics) hardware? I know NVIDIA is very dominant, but is it mainly due to an existing software ecosystem?
  • BenSkywalker - Wednesday, July 31, 2019 - link

    AMD has two major hurdles to overcome when specifically looking at AI/ML on GPUs: essentially non-existent software support and essentially non-existent hardware support. AMD has chosen the route of focusing on general-purpose cores that can perform solidly on a variety of traditional tasks, both in hardware and software. AI/ML benefit enormously from specialized hardware that in turn takes specialized software to utilize.

    This entire article is stacking up $40k worth of Intel CPUs against a consumer nVidia part, and Intel gets crushed whenever nVidia can use its specialized hardware. Throw in a few Tesla V100s to give us something resembling price parity and Intel would be eviscerated.

    AMD needs tensor cores, a decade's worth of tools development, and a decade's worth of pipeline development (university training, integration into new systems, and build-out onto those systems – not hardware pipelines) just to get to where nVidia is now, and that's if nVidia were standing still.

    The software ecosystem is the biggest problem long term: everyone working in the field uses CUDA whenever they can. Even if AMD mopped the floor with nVidia on the hardware side, for their GPUs to get traction they would need all the development tools nVidia has spent a decade building – and right now their GPUs are held back relative to nVidia because of that specialized hardware.
  • abufrejoval - Tuesday, July 30, 2019 - link

    Some telepathy must be involved: Just a day or two before this appeared online, I was looking for Johan de Gelas' last appearance on AT in 2018 and thinking that it was high time for one of my favorite authors to publish something. Ever so glad you came out with the typical depth, quality and relevance!

    While GAFA and BATX seem to lead AI and the frameworks, their problems and solutions mostly fit their own needs, and as it turns out the vast majority of use cases can neither afford the depth and quality they require nor benefit from it: if the responsibility of your AI is to monitor for broken drill bits using vibration, sound, and normal and thermal visuals, the ability to identify cats of every shape and color has no benefit.

    The big guys typically need to solve a sharply defined problem in a single domain at a very high quality: they don't combine visual with audio, and the inherent context in time-series video is actually ignored, as their AIs stare at each frame independently, hunting for known faces or things to tag in order to correlate social graphs and products.

    Iterating over ML approaches, NN designs and adequate hyperparameters for training requires months, even with clusters of DGX workstations and highly experienced ML experts. What makes all that effort worthwhile is that the inference part can then run at relatively low power on your mobile phone inside WeChat, Facebook, Instagram, or Google keyboard/translate (or some other "innocent" background app) across billions of instances: trial and train until you have a single sufficiently good network design – in days, weeks or even months – and then you can deploy inference to billions of devices running on battery power.

    Few of us smaller IT companies can replicate that, but then again, few of us need to, because we have a vastly higher number of small problems to solve, with a few orders of magnitude less of a difference in training:inference effort: 1 Watt of difference makes or breaks the usability of an inference model on mobile target devices, while 100 Watts of difference on a couple of servers running a dozen instances of a less optimized but well trained model won't justify an ML-expert team working through another five pizzas.

    As the complexity of your approach (e.g. XGBoost or RF) is perhaps much smaller, or your networks much simpler, than those of GAFA/BATX, you actually worry about how to scale in, not out, and how to batch dozens of training runs for model iteration, mixed with some QA or even production inference streams, on GPUs – which Linux understands, or rather treats, little better than a printer with DMA.

    Intel quite simply understands that while you get famous with the results you get from training AIs, e.g. on GPUs, the money is made from inference at the lowest power and lowest operational overhead: Linux (or Unix for that matter) knows how to manage virtual memory (preferably uniform) and CPUs (preferably few); a memory hierarchy deeper than the manual for your VCR, and more types and numbers of cores than Unics' first hard disk had blocks, confuse it.

    But I'd dare say that AMD understood this much earlier and much better. When they came up with HSA on their first APUs, that GPGPU blend, which allowed switching the compute model with a function call, made CUDA look very brutish indeed.

    Writing code able to take full advantage of these GPGPU capabilities is still a nightmare, because high-level languages have abstraction levels far too low for what these APUs or VNNI CPUs can execute in a single clock cycle, but from the way I read it, the Infinity Fabric is about making those barriers as low as they can possibly be in terms of hardware and memory space.

    And RISC-V goes beyond what all x86 advocates still suffer from: An instruction set that's not designed for modular expandability.
  • FunBunny2 - Wednesday, July 31, 2019 - link

    "Trial and train until you have trained the single sufficiently good network design in days, weeks or even months and then you can deploy inference to billions of devices on battery power."

    When and if this capability is used for something useful, e.g. a cure for cancer, rather than yet another scheme to extract moolah from rubes, then I'll be interested.
  • keg504 - Tuesday, July 30, 2019 - link

    Why do you say on the testing page that AMD is colour coded in orange, and then put them in grey?
  • 808Hilo - Wednesday, July 31, 2019 - link

    Client/server renamed again...
    There is no AI. That stuff is very, very dumb. Look at the diagram above: nothing new. Data goes in, a script does something, then there's parsing and readout of vastly unimportant info. I have not seen a single meaningful AI app. It's now year 25 of the Internet and I am terribly bored. Next please.
  • J7SC_Orion - Wednesday, July 31, 2019 - link

    This explains very nicely why Intel has been raiding GPU staff and pouring resources into Xe Discrete Graphics... if you can't beat them, join them?
  • tibamusic.com - Saturday, August 3, 2019 - link

    Thank you very much.
  • Threska - Saturday, August 3, 2019 - link

    What a coincidence. The latest humble bundle is "Data Analysis & Machine Learning by O'Reilly"

    https://www.humblebundle.com/books/data-analysis-m...
