Make no mistake: Intel’s Xeon Processor Scalable Family, based on the company’s Skylake architecture, is about much more than revving up CPU performance. The new processor line is essentially a platform for computing, memory and storage designed to let data centers — groaning under the weight of cloud traffic, ever-expanding databases and machine-learning data sets — optimize workloads and curb operational costs.
In order to expand the market for its silicon and maintain its de facto processor monopoly in the data center, Intel is even starting to encroach on server-maker turf by offering what it calls Select Solutions, generally referred to in the industry as engineered systems — packages of hardware and software tuned to specific applications.
“They’re moving up the food chain,” said Patrick Moorhead, principal at Moor Insights & Strategy. “It’s the first time that I feel like Intel can say they are delivering optimized workload-based computing.”
The Xeon Scalable line, officially unveiled Tuesday at an event in New York, offers up to 28 cores per processor and brings together a variety of integrated accelerators and complementary fabric, memory and storage technology, some of which the company did not have until recently, Moorhead noted. It also represents a new way for Intel to package, market and deliver its technology, particularly for data centers that are moving toward software-defined infrastructure.
The launch is timely because just last month AMD unveiled its Epyc line, which offers up to 32 cores per processor and may be the most serious rival in the data center that Intel has seen in more than a decade.
Intel, for its part, has completely re-architected the Xeon processor family for servers. As it moves from the prior-generation Broadwell architecture to Skylake, the company is bringing two previously segmented Xeon processor lines — the E5 and E7 — into a unified platform in order to provide flexibility and scalability from entry-level network, compute and storage workloads all the way up to mission-critical in-memory database and analytics applications, said Jennifer Huffstetler, senior director of product management for the Xeon processor family.
“We’re seeing this as the biggest advancement in our data center platform in a decade,” Huffstetler said. “We think this is really the platform for network refresh, whether it’s enterprise, communications or cloud.”
Key to the new platform, code-named “Purley,” is Intel’s new Mesh architecture, replacing the Ring Bus design Intel introduced along with the Nehalem platform in 2008.
“We were using the Ring architecture, but as we added more cores and memory and I/O we were seeing a bottleneck, and so we’ve invested in this new data-center-specific Mesh architecture,” Huffstetler said. The Mesh configuration optimizes the processor architecture for data sharing and memory access among all the cores and threads, allowing the processors to scale from two sockets up to eight sockets while speeding throughput.
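The scaling argument behind the move from a ring to a mesh can be sketched with some back-of-the-envelope topology math. The figures below assume idealized topologies (a bidirectional ring versus a square 2D grid), not Intel's actual floor plan, but they show why worst-case hop counts grow much more slowly on a mesh as core counts rise:

```python
import math

def worst_case_hops_ring(n: int) -> int:
    """Farthest pair of stops on a bidirectional ring of n stops."""
    return n // 2

def worst_case_hops_mesh(n: int) -> int:
    """Farthest pair on a roughly square 2D mesh of n stops
    (Manhattan distance, corner to corner)."""
    side = math.isqrt(n)  # assume n is a perfect square for simplicity
    return 2 * (side - 1)

# As the node count grows, the ring's worst case grows linearly,
# while the mesh's grows only with the square root of the node count.
for nodes in (16, 36, 64):
    print(nodes, worst_case_hops_ring(nodes), worst_case_hops_mesh(nodes))
```

At 64 nodes the idealized ring's worst case is 32 hops versus 14 for the mesh, which is the kind of gap that turns into the latency bottleneck Huffstetler describes.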
The Xeon Processor Scalable Family offers four processor tiers, representing different levels of performance and a variety of integration and acceleration options. The tiers have a new nomenclature based on metals (Bronze, Silver, Gold, Platinum) to make the options simple to understand.
Altogether, Intel is offering more than 50 different processor SKUs at different price points via the four tiers, Huffstetler said.
Though Intel has teased the market with information on the various tiers, it has now unveiled specs and pricing, and it says real-world tests by early customers show performance gains in HPC (high-performance computing) as well as enterprise, cloud and communications applications. Intel says that two-socket versions of the new Xeons show 60 percent average performance gains, and four-socket models 50 percent, over prior-generation Broadwell processors.
–Platinum-level processor SKUs start with the number 8, offer up to 28 cores with 56 threads, run at up to 3.6GHz with two-, four- or eight-socket support, and provide up to three of Intel’s new 10.4 gigatransfer-per-second UPI (UltraPath Interconnect) links. Processors at this level also sport 48 PCIe 3.0 lanes and six memory channels supporting 2666MHz DDR4 DRAM, with support for up to 1.5TB of memory.
Intel has also integrated AVX-512 into these processors: the 512-bit extensions to the Advanced Vector Extensions SIMD instructions for the x86 architecture, designed to speed up tasks like video encoding and decoding, image processing, data analysis and physics simulations. Platinum-level processors also integrate, as user options, 10Gbps Ethernet and 100Gbps Omni-Path fabric. Intel’s Volume Management Device technology lets NVMe SSDs like the superfast Optane DC P4800X and the 3D NAND-based DC P4600 be hot-swapped.
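The idea behind SIMD extensions like AVX-512 is that one instruction operates on a whole register's worth of packed values at once. The sketch below is purely conceptual — it mimics the lane-at-a-time processing model in plain Python rather than using real AVX-512 intrinsics, and the lane width of 16 reflects the 16 packed 32-bit floats a 512-bit register can hold:

```python
LANE_WIDTH = 16  # a 512-bit register holds 16 packed 32-bit values

def simd_add(a, b):
    """Add two equal-length vectors in LANE_WIDTH-sized chunks,
    mimicking how a single AVX-512 instruction adds 16 floats at once.
    (Conceptual sketch only; real SIMD happens in hardware registers.)"""
    assert len(a) == len(b)
    out = []
    for i in range(0, len(a), LANE_WIDTH):
        # One "instruction": the whole lane is processed together.
        out.extend(x + y for x, y in zip(a[i:i + LANE_WIDTH], b[i:i + LANE_WIDTH]))
    return out

# 32 elements take only two lane-wide "instructions" instead of 32 scalar adds.
print(simd_add(list(range(32)), list(range(32))))
```

With scalar instructions the same 32-element add would take 32 operations; the wide-register model is where the speedups for encoding, image processing and physics workloads come from.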
–Gold-level processors are split into 61XX and 51XX series SKUs, offering up to 22 cores with 44 threads, two- or four-socket support, three 10.4 GT/s UPI links and AVX-512. They top out at 3.4GHz and also support 2666MHz DDR4 DRAM.
–Silver-level processors carry the 41XX number series and offer up to 12 cores and 24 threads, topping out at 2.2GHz. They support 2400MHz DDR4 DRAM, have two 9.6 GT/s UPI links and AVX-512, and come in two-socket configurations.
–Bronze-level processors carry the 31XX number series, have up to eight cores and run at up to 1.7GHz. They offer two-socket configurations, 2133MHz DDR4 DRAM, two 9.6 GT/s UPI links and AVX-512.
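The naming scheme in the tiers above — the leading digit of the SKU number encodes the metal tier — can be captured in a small lookup. This is an illustrative sketch based only on the series numbers listed here, not an official Intel decoder:

```python
# Leading SKU digit -> metal tier, per the tiers described above:
# 8xxx = Platinum, 6xxx/5xxx = Gold, 4xxx = Silver, 3xxx = Bronze.
TIER_BY_LEADING_DIGIT = {
    "8": "Platinum",
    "6": "Gold",
    "5": "Gold",
    "4": "Silver",
    "3": "Bronze",
}

def tier_for_sku(sku: str) -> str:
    """Return the tier for a SKU number like '8180' or '4110'."""
    return TIER_BY_LEADING_DIGIT.get(sku[0], "Unknown")

print(tier_for_sku("8180"))  # Platinum
print(tier_for_sku("4110"))  # Silver
```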
Recommended customer pricing runs from $213 to $10,009 for the main SKUs, and up to $13,011 for the largest-memory SKUs.
Intel has addressed security with a number of different technologies. One of them is QuickAssist Technology (QAT), which accelerates cryptographic and compression workloads by offloading them to dedicated hardware optimized for those functions. QAT enables data encryption to run with less than a 0.5 percent hit on performance, Huffstetler said.
QAT can also improve packet-processing performance; ease the integration of software-defined networking (SDN) and network functions virtualization (NFV) applications; accelerate data movement in Hadoop installations; and handle 4G LTE and 5G encryption for mobile gateways and infrastructure.
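The kind of data path QAT offloads — compress the payload, then run a cryptographic operation over it — can be sketched in software to show where the CPU cycles go. This is not the QAT API (which is accessed through Intel's driver stack); it is a plain stdlib sketch of the two stages that, on a QAT-equipped system, would move off the cores:

```python
import hashlib
import zlib

def process_payload(data: bytes) -> tuple:
    """Software version of a compress-then-digest data path. On a
    QAT-equipped system both stages could be offloaded to the
    accelerator instead of consuming CPU cycles.
    (Illustrative sketch; not the actual QAT API.)"""
    compressed = zlib.compress(data, level=6)        # compression stage
    digest = hashlib.sha256(compressed).hexdigest()  # crypto stage
    return compressed, digest

payload = b"log line " * 1000
compressed, digest = process_payload(payload)
print(len(payload), "->", len(compressed))  # repetitive data shrinks sharply
```

In a Hadoop or packet-processing pipeline, these stages run on every block of data, which is why offloading them to fixed-function hardware frees so much general-purpose compute.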
It’s not just the raw performance but the improved I/O and workload-optimization features that high-performance computing data centers now require, according to Scott Miller, senior data center director at World Wide Technology, a systems integrator that has been testing the new Xeon family.
“The performance required for a server to run true networking and storage is a lot higher than what server nodes were built to do before,” Miller said. “We’ll be able to get much higher throughput and density on a single node than we would have been able to do before on the older architecture.”
Those attributes not only can save operational costs but also support software-defined infrastructure, which allows data center managers to deploy and balance workloads by using software commands.
“The processors have the ability to handle the type of I/O that comes with software-defined workloads,” Miller said.
As impressive as the performance gains and new I/O features may be, Intel faces a formidable new challenge as AMD’s Epyc processor family ships. The line tops out at 32 cores, 64 threads and 3.2GHz, while offering eight 2666MHz DDR4 DRAM channels and 128 PCIe lanes all the way from the top-of-the-line processors down to the low end of the range — all available in single-socket versions that save space and cost.
Though Intel’s internal benchmarks have its top Xeon Scalable processor outperforming the top Epyc chip in a test of two-socket configurations, Intel notes that the testing was done with software and workloads that may have been optimized for performance only on Intel processors.
“Architecturally AMD is going to give Intel a run for its money,” said Ashish Nadkarni, an IDC analyst. This is a big reason why Intel’s efforts to push beyond pure CPU performance are important.
For example, Intel is striking out in a new direction with its Select Solutions, starting with packaged hardware and software systems for VMware Virtual SAN, Microsoft SQL Server and Ubuntu NFVi deployments. Intel is coming up with reference designs for the systems and is working with manufacturers including HPE, Ericsson, Lenovo, Huawei, Inspur, Super Micro, Sugon and Quanta to get them implemented and delivered.
“Intel is trying to tighten their grip on the ecosystem,” Nadkarni said. “The engineered system is their way of saying most of the functions that are being delivered outside of the Intel perimeter today will be brought inside the fold — they know they can’t differentiate simply on the basis of the CPU; they have to go beyond the CPU now.”