Intel announced the third generation of its Xeon Scalable server-processor series, which includes more than three dozen new chips based on the company’s long-awaited 10-nanometer manufacturing process and packed with security and AI features.
The new chips, codenamed Ice Lake, took a long time to arrive because of Intel’s repeated delays in moving its manufacturing process down to 10nm. AMD, meanwhile, is already at 7nm thanks to its manufacturing partner TSMC, and its Epyc processors are slowly but steadily eroding Intel’s market share.
The Ice Lake series, according to Intel, delivers a 20 percent increase in instructions executed per clock cycle over the previous generation, thanks to the smaller process node, which lets Intel pack more transistors into the same die area.
The new top-of-the-line Xeon Platinum 8380, for example, has 40 cores and 80 threads at a base frequency of 2.3GHz; the previous high-end model topped out at 28 cores and 56 threads. Compared with Intel’s previous-generation server CPUs, customers can expect a 46 percent performance boost in “common data-center workloads,” and Intel says Ice Lake-based servers can run 2.65 times faster than a five-year-old server.
Each processor supports up to 6TB of system memory per socket, eight DDR4-3200 memory channels per socket, and 64 PCIe Gen4 lanes per socket.
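Those eight DDR4-3200 channels translate directly into a theoretical peak memory bandwidth: DDR4-3200 performs 3,200 million transfers per second on a 64-bit (8-byte) bus, multiplied across the channels. A back-of-the-envelope sketch (per socket, using only the figures stated above):

```python
# Theoretical peak memory bandwidth for 8 channels of DDR4-3200.
# DDR4-3200 = 3200 million transfers/second on a 64-bit (8-byte) bus.

TRANSFERS_PER_SEC = 3200e6   # MT/s for DDR4-3200
BYTES_PER_TRANSFER = 8       # 64-bit memory bus width
CHANNELS = 8                 # memory channels per socket on Ice Lake Xeon

peak_gb_per_sec = TRANSFERS_PER_SEC * BYTES_PER_TRANSFER * CHANNELS / 1e9
print(f"Theoretical peak: {peak_gb_per_sec:.1f} GB/s per socket")  # 204.8 GB/s
```

Real-world sustained bandwidth will be lower, but the calculation shows why doubling the channel count (the prior generation had six channels) matters as much as the core-count increase for memory-bound workloads.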
40 chips for three markets
Intel is launching a total of 40 chips aimed at three separate markets. The new Xeons are designed and configured for cloud workloads, supporting a wide variety of service environments for cloud providers.
Intel’s network-focused N-SKUs are tuned for different workloads and performance levels and are designed for network environments. Intel reports that this generation of Xeon Scalable processors outperforms the previous one by 62 percent across a range of widely deployed network and 5G workloads.
The new processors provide the performance, security, and operational controls needed for AI, complex image or video analytics, and consolidated workloads at the intelligent edge. Intel claims the platform delivers up to 1.56 times the AI-inference performance of the previous generation on image classification.
Intel also prioritizes security, an area where AMD’s Epyc has offered hardware protections since its first generation. In an online press briefing, Navin Shenoy, executive vice president and general manager of Intel’s Data Platforms Group, touted the Xeon’s Software Guard Extensions (SGX), which let the CPU carve parts of a server’s memory into protected “enclaves” for storing sensitive data such as encryption keys. Other programs running on the same server cannot access data in protected enclaves, even if they have full administrator access to the system.
The latest Xeon also provides the aptly named Intel Crypto Acceleration, which speeds up major cryptographic workloads, including AES, SHA, and Galois-field (GFNI) operations, allowing for real-time encryption without compromising performance.
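Hardware crypto acceleration of this kind is transparent to application code: the same library calls simply run faster when the CPU exposes the relevant instructions, because the crypto library selects the fastest implementation at runtime. A minimal illustration using Python’s standard library, whose `hashlib` is backed by OpenSSL (one of the libraries that dispatches to hardware SHA and AES instructions where available):

```python
import hashlib

# Application code stays the same whether or not the CPU accelerates SHA:
# OpenSSL, behind hashlib, picks the fastest available implementation
# (hardware SHA extensions on CPUs that have them, software otherwise).
digest = hashlib.sha256(b"hello").hexdigest()
print(digest)
```

This is why instruction-level acceleration matters for servers: TLS termination, disk encryption, and hashing-heavy workloads get faster without any application changes.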
Although Ice Lake was only released today, Intel has already shipped more than 200,000 units to early adopters. OEMs including Cisco, Hewlett Packard Enterprise, Dell, and Lenovo were among the first to announce support for the latest Xeon Scalable, and many more are sure to follow.