There has long been a strong desire for a series of industry-standard machine learning benchmarks, akin to the SPEC benchmarks for CPUs, in order to compare competing solutions. Over the past two years, MLCommons, an open engineering consortium, has been developing and disclosing its MLPerf benchmarks for training and inference, with key consortium members releasing benchmark numbers as the series of tests gets refined. Today we see the full launch of MLPerf Inference v1.0, along with roughly 2000 results added to the database. Alongside this launch, a new MLPerf Power Measurement technique, providing additional metadata on these test results, is also being disclosed.