Friday, August 11, 2023

Sapphire Rapids Core-to-Core Latency

In my previous post, I covered the latest generation of AMD CPUs (Rome + Milan). Now that AWS has released Sapphire Rapids M7i instances to GA, I’ve finally gotten my hands on some (expensive) EC2 instance time with a Sapphire Rapids CPU. As a foreword, the analysis here is fuzzier due to a couple of factors:

  1. I was only able to reserve an m7i.48xlarge instance, rather than a bare metal host (so hypervisor overhead/interference is possible)

  2. My statistical analysis of the various latency overheads in the system lacks whitebox datapoints from actual performance monitoring counters

I consider the blackbox analysis here a jumping-off point for a more principled whitebox analysis of the overhead of EMIB. Comments/feedback on directions for that are welcome!
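For readers who haven’t seen the earlier posts, a quick note on what these numbers mean: each data point is the round-trip time for a cache line bounced between two threads pinned to a pair of logical CPUs, timed with rdtsc() and converted to nanoseconds. The sketch below illustrates the general shape of that kind of measurement; it is not the exact harness used for the data here, and TSC_GHZ is a placeholder for the measured TSC frequency used to normalize ticks into nanoseconds.

/*
 * Minimal core-to-core round-trip sketch: two threads pinned to different
 * CPUs bounce a flag in a shared cache line, and the initiating side times
 * the loop with rdtsc. Warm-up and statistics are omitted for brevity.
 * Assumes an invariant TSC; TSC_GHZ is a placeholder for the measured
 * TSC frequency.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <x86intrin.h>

#define ITERS 100000
#define TSC_GHZ 2.4 /* placeholder: substitute the measured TSC frequency */

static _Alignas(64) atomic_int flag; /* the cache line being bounced */

static void pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *responder(void *arg) {
    pin_to_cpu(*(int *)arg);
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
            ; /* wait for the initiator's ping */
        atomic_store_explicit(&flag, 2, memory_order_release); /* pong */
    }
    return NULL;
}

int main(void) {
    int initiator_cpu = 0, responder_cpu = 1; /* any pair of logical CPUs */
    pthread_t t;
    pthread_create(&t, NULL, responder, &responder_cpu);
    pin_to_cpu(initiator_cpu);

    unsigned long long start = __rdtsc();
    for (int i = 0; i < ITERS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release); /* ping */
        while (atomic_load_explicit(&flag, memory_order_acquire) != 2)
            ; /* wait for the responder's pong */
    }
    unsigned long long ticks = __rdtsc() - start;
    pthread_join(t, NULL);

    printf("cpu %d <-> cpu %d: %.1f ns per round trip\n", initiator_cpu,
           responder_cpu, (double)ticks / ITERS / TSC_GHZ);
    return 0;
}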

After reserving an m7i.48xlarge instance, I got to work. This particular Sapphire Rapids CPU reports itself as a Xeon Platinum 8488C:

Searching through Intel ARK turns up no such publicly described SKU, which means this Xeon is a custom part designed to AWS’s particular requirements. Given AWS’s volume and known customer profiles, the existence of a custom SKU here is not at all surprising. Azure, GCP, and AWS all have deep enough pockets and a large enough set of orders to justify Intel blowing a different set of fuses to create custom SKUs.

As we discussed previously, AMD took the more aggressive tack and introduced a chiplet-based architecture with multiple physical dies per socket lashed together with an on-package interconnect. My previous coverage and general industry experience demonstrated a palpable performance hit when communicating between CPU cores running on different dies. While the hit is lower than the traditional performance penalty experienced on true multi-socket NUMA machines, the performance cost is still far from trivial. AMD knew this full well, and subsequent generations of AMD Zen CPUs optimized the physical topology to minimize the amount of data that needs to cross chiplets (increasing the size of CCDs/CCXs so that fewer core pairings require cross-chiplet communication) and improved the latency of the underlying interconnect layer.

With that in mind, Intel has taken longer to move to a chiplet-based approach. Initially Intel remained on a more conservative path, fabricating larger monolithic dies sporting lower core counts per socket. However, the physical limits of chip manufacturing equipment mean Intel is now forced to either shrink the physical core and cache sizes (and suffer a loss of single-threaded performance through a simpler core design), or succumb to the reality that cores must be disaggregated across multiple dies.

To that end, Sapphire Rapids marks the first time Intel has adopted its EMIB interconnect technology for a mainstream server product. For the high core count variants, EMIB is used to integrate 4 separate chiplets into the same physical package:

Credits: SemiEngineering

Each Sapphire Rapids package is composed of multiple dies connected to one another using the EMIB interconnect. Within each die are up to 15 CPU cores, connected within the same physical die using the same style of mesh interconnect found on previous monolithic dies such as Ice Lake. The diagram below shows the entire physical package with one of the dies annotated:

Credits: Locuza

The EMIB phy bridges are scattered around the edges of each die to provide interconnection with the adjacent dies. The implication here is that Sapphire Rapids has a more hierarchical design than Zen. Depending on which cores are communicating, either 0, 1, or 2 EMIB links must be traversed in each direction within a package. This is in contrast to Zen, where either 2 links are traversed in each direction (CCX/CCD → IO die → CCX/CCD) or no links are traversed (within the same CCX/CCD).

Another implementation detail worth noting here is that each of the 4 dies per Sapphire Rapids package has its own UPI interconnect. However, in this 2S system, only 1 UPI link is active per package. Thus, depending on which die in each package hosts the active UPI link, there is a variable amount of latency just to reach the UPI link. The worst case latency in the system will be experienced by CPU core pairs where each core is on a die which is 2 EMIB hops away from the die hosting the UPI link tile in its package.

With those preliminaries out of the way, let’s take a look at the data collected from our 48-core Xeon Platinum 8488C CPU running in a (very expensive) dual-socket M7i EC2 host:

First up, we have our full system latency map. Here I’ve normalized the latency measurements to nanoseconds (previously I kept the scale in rdtsc() ticks). The typical NUMA socket pattern shows up rather clearly: cores on the same NUMA node are closer and thus communicate more cheaply, while the opposite holds for CPU cores on different NUMA nodes. Hyperthreads sharing a physical CPU core show up neatly as the blue semi-diagonal lines. So far, this is pretty standard compared to Ice Lake (below):

There is, however, something interesting in the latency map which was not present in the previous Ice Lake CPUs (as seen above). In the Ice Lake CPU, the latency across each sub-box representing in-NUMA-node pairs vs. out-of-NUMA-node pairs is fairly uniform. That is, the mesh interconnect has a (relatively) uniform latency distribution. However, for Sapphire Rapids we see a non-trivial amount of core-to-core latency variation within a physical package. Zooming in to a single NUMA node, we have the below latency map:

Unlike Ice Lake and earlier, there is quite a significant amount of non-uniformity within this NUMA node. In fact, the latency map here looks closer to the Zen latency maps we’ve seen before. This appears to be, in part, an effect of the EMIB implementation. Let’s pull up a histogram of the full system latency data across both sockets:

In the above data we see 6 primary modes (though some minor modes can also be seen at ~625ns and ~825ns):

  • ~50ns

  • ~125ns

  • ~225ns

  • ~425ns

  • ~550ns

  • ~750ns

To break down the costs of EMIB hops, UPI hops, and the underlying cost of the mesh interconnect within each die, we undertake a differential analysis. First up, hyperthreads on a single physical core require 50ns for a round trip. We treat that as the fixed cost of the software measurement itself and the baseline overhead of communicating across cores. Secondly, within a single socket, the distribution of 0, 1, or 2 EMIB hops is as follows (a small sketch verifying these counts follows the list):

  • 0 hops → (24*22)*4 = 2112 pairs (~25%)

  • 1 hop → (24*24)*2*4 = 4608 pairs (~50%)

  • 2 hops → (24*24)*4 = 2304 pairs (~25%)
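As a sanity check on those counts, below is a small sketch that enumerates ordered thread pairs under the assumed topology: 4 compute dies in a 2x2 arrangement, 12 physical cores (24 hyperthreads) per die, and an EMIB hop count equal to the grid distance between dies. The 2x2 layout and the thread-to-die numbering are modeling assumptions for the counting, not something read from the hardware.

/*
 * Count ordered intra-socket thread pairs by EMIB hop distance, excluding
 * pairs that share a physical core (those are the ~50ns hyperthread mode).
 */
#include <stdio.h>
#include <stdlib.h>

#define DIES 4
#define THREADS_PER_DIE 24 /* 12 physical cores x 2 hyperthreads */
#define THREADS_PER_SOCKET (DIES * THREADS_PER_DIE)

static int die_of(int t)  { return t / THREADS_PER_DIE; }
static int core_of(int t) { return t / 2; } /* assume sibling threads are adjacent IDs */

static int emib_hops(int a, int b) {
    /* Manhattan distance between dies laid out on an assumed 2x2 grid. */
    return abs(a % 2 - b % 2) + abs(a / 2 - b / 2);
}

int main(void) {
    int pairs[3] = {0, 0, 0};
    for (int a = 0; a < THREADS_PER_SOCKET; a++)
        for (int b = 0; b < THREADS_PER_SOCKET; b++) {
            if (a == b || core_of(a) == core_of(b))
                continue; /* skip self and hyperthread-sibling pairs */
            pairs[emib_hops(die_of(a), die_of(b))]++;
        }
    int total = pairs[0] + pairs[1] + pairs[2];
    for (int h = 0; h < 3; h++) /* prints 2112, 4608, 2304: the ~25/50/25 split */
        printf("%d EMIB hop(s): %d pairs (%.0f%%)\n",
               h, pairs[h], 100.0 * pairs[h] / total);
    return 0;
}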

Looking at a CDF over single-socket core pair latencies, we find:

The 75ns-150ns mode accounts for ~75% of core pairs, while the 175ns-225ns mode accounts for the remaining ~25% of core pairs. Given the above CDF and the distribution of core pairs at each hop distance, this strongly suggests that the 0-1 EMIB hop costs (~75% of intra-socket pairs) fit the 75ns-150ns mode, while the 2 EMIB hop costs fit the ~175ns-250ns mode. The variation within each latency mode likely stems from intra-die mesh interconnect cost variation (based on the distance of the core on the mesh to the EMIB phy).

That works out to: 50ns base cost + N mesh cost = 75ns → mesh cost ~= 25ns-50ns (mesh cost within a die has variability based on different path lengths within the mesh). These numbers mesh well (pun intended) with measurements on Ice Lake from previous studies.

Next, looking at the 1 EMIB case: 50ns base cost + 25ns mesh cost (source die) + 25ns mesh cost (destination die) + N EMIB cost = 150ns → EMIB cost = 50ns. Here the data does seem to suggest a 50ns penalty to take an EMIB hop within a package. Finally, the 2 EMIB case works out to: 50ns base cost + 25ns mesh cost (source die) + 2*50ns EMIB cost + 25ns mesh cost (destination die) = 200ns. This approximately matches the distribution seen in testing, providing additional support for the cost model so far. Furthermore, it also strongly suggests that Intel implemented a mesh interconnect bypass for the 2 EMIB hop case within a CPU package (since there is no 25ns mesh cost for the intervening die).

The modes in the 50ns-225ns range are from core pairs within the same package, while the upper set of modes from 400ns-850ns are from cross-NUMA-node core pairs. Given the 300ns gap between the lower mode in each grouping, it stands to reason the penalty to traverse the UPI (NUMA) link is ~300ns (the uppermost modes in the second group are associated with core pairs which require multiple EMIB hops within their respective packages plus a UPI hop, including the 4 EMIB hop case between cores on the farthest dies in the 2S system). This is a little perplexing given the UPI cost was so much lower in Ice Lake, so unfortunately I do not yet have a great explanation here.
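Condensed into arithmetic, the differential analysis so far looks like the sketch below. The inputs are the approximate mode locations read off the measured histogram and CDF above, so the derived per-hop costs are rough estimates rather than measured quantities.

/* Back out the per-hop costs from the approximate latency modes. */
#include <stdio.h>

int main(void) {
    double base      = 50;  /* hyperthread-sibling round trip (fixed cost) */
    double same_die  = 75;  /* low edge of the 0 EMIB hop (same die) mode  */
    double one_emib  = 150; /* upper edge of the 0-1 EMIB hop mode         */
    double intra_low = 125; /* lower intra-socket histogram mode           */
    double cross_low = 425; /* lower cross-socket histogram mode           */

    double mesh = same_die - base;            /* ~25ns per die traversed   */
    double emib = one_emib - base - 2 * mesh; /* ~50ns per EMIB hop        */
    double upi  = cross_low - intra_low;      /* ~300ns per UPI hop        */

    printf("mesh ~= %.0fns, EMIB hop ~= %.0fns, UPI hop ~= %.0fns\n",
           mesh, emib, upi);
    return 0;
}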

Matching the above differential analysis with the set of all possible paths through the system, we arrive at the following path classes to estimate costs for (a sketch applying the cost model to each class follows the list):

  • 0 EMIB hops

  • 1 EMIB hop

  • 2 EMIB hops

  • 1 UPI hop

  • 1 UPI hop + 1 EMIB hop

  • 1 UPI hop + 2 EMIB hops

  • 1 UPI hop + 3 EMIB hops

  • 1 UPI hop + 4 EMIB hops
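Running the additive cost model forward over each of those path classes yields the rough predictions in the sketch below. The per-hop numbers are the estimates derived above, so these are model outputs to compare against the measured modes rather than measurements in their own right.

/*
 * Predicted round-trip latency for each path class under the additive cost
 * model: base + mesh on the source/destination dies + EMIB hops + UPI hops.
 * Intervening dies are assumed to be bypassed, per the analysis above.
 */
#include <stdio.h>

#define BASE_NS 50.0  /* fixed software + same-core baseline              */
#define MESH_NS 25.0  /* mesh traversal on the source or destination die  */
#define EMIB_NS 50.0  /* per EMIB hop within a package                    */
#define UPI_NS 300.0  /* per UPI hop between sockets                      */

static double predict(int emib, int upi) {
    /* Same-die pairs pay one mesh traversal; cross-die pairs pay one on
     * the source die and one on the destination die. */
    double mesh = (emib == 0 && upi == 0) ? MESH_NS : 2 * MESH_NS;
    return BASE_NS + mesh + emib * EMIB_NS + upi * UPI_NS;
}

int main(void) {
    struct { int emib, upi; } paths[] = {
        {0, 0}, {1, 0}, {2, 0},                 /* intra-socket classes */
        {0, 1}, {1, 1}, {2, 1}, {3, 1}, {4, 1}, /* cross-socket classes */
    };
    for (size_t i = 0; i < sizeof(paths) / sizeof(paths[0]); i++)
        printf("%d EMIB hop(s) + %d UPI hop(s): ~%.0f ns\n",
               paths[i].emib, paths[i].upi,
               predict(paths[i].emib, paths[i].upi));
    /* Note: the longest paths come out well below the measured ~750ns+
     * modes, which is where this simple model breaks down (see below). */
    return 0;
}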

Overall, this model approximately matches the measured distribution and its primary modes, but it does not entirely match the full set of modes. Notably, the higher values, around 750ns and above, do not neatly fit into the cost model. It’s entirely possible (in fact, likely) that the model is too simplistic and neglects higher order costs which contribute additional latency on the longest paths in the system. Given personal time constraints on my end, I’m unable to dig more deeply and need to defer further investigation into these costs (using performance counters and other tools) to a later time.

With EMIB, Intel has joined the chiplet club for mainstream server CPUs (Intel previously used EMIB-based chiplet packaging for FPGAs and Foveros stacking on the low power Lakefield client CPUs). EMIB adoption in Sapphire Rapids enables Intel to rejoin the core count war and scale up to 60 cores per socket in its halo SKUs. However, while EMIB does appear to be a fairly well optimized interconnect, it is neither free nor transparent from a performance point of view. By adding additional circuitry and physical distance to traverse across the CPU package, EMIB incurs a penalty when communicating between cores on different dies within a package.

Based on absolute latency values, and assuming our breakdown for the cost of each hop, it would appear the penalty for an EMIB hop alone is about 16% of the cost of a UPI hop (50ns vs. 300ns), though the additional mesh hops at the source and destination dies add further cost. All in all, not as painful as the chiplet interconnect in AMD Zen. Also, my initial take when I saw the lack of all-to-all EMIB interconnection between dies on each package (implying the existence of the 2 EMIB hop intra-socket path) was concern about the latency hit for the 2 EMIB hop case. It appears Intel did work to minimize that hit through some level of bypass around the mesh interconnect within the intervening die.

In spite of these latency costs, I’d imagine that for many workloads EMIB won’t be a significant issue. These high core count CPUs normally find their way into servers at hyperscalers, who then carve up those servers into much smaller VMs. With Amdahl’s law punishing highly parallel workloads and VM packing in play, many workloads will never span more than 1 die, much less 1 physical socket. As such, workloads which would experience the most significant performance impact must both:

  1. Scale up to the entire package/pair of sockets

  2. Exercise tightly coupled concurrency with frequent core-to-core communication

Not many workloads tick both of these boxes (the combination of the two factors was already a known scalability risk, so software tries to avoid this “devil’s corner” of behavior). Therefore, so long as the majority of a workload is kept bound to 1-2 Sapphire Rapids dies, the impact is likely minimal in practice.
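As a purely illustrative sketch of that kind of binding (the CPU-to-die mapping below is entirely hypothetical, since the OS does not expose the EMIB topology directly; in practice you would derive it from a latency map like the ones above, or simply rely on VM placement):

/*
 * Pin the current process to one hypothetical die's worth of logical CPUs.
 * The assumption that physical cores 0-11 (and their hyperthread siblings
 * 96-107) live on the same die is illustrative only.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int c = 0; c < 12; c++) {
        CPU_SET(c, &set);      /* hypothetical physical cores on die 0     */
        CPU_SET(c + 96, &set); /* their hyperthread siblings (assumed IDs) */
    }
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("bound to one (assumed) die's worth of CPUs\n");
    return 0;
}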

It’s interesting to note that with the upcoming Emerald Rapids follow-on to Sapphire Rapids, Intel is actually somewhat back-tracking from chiplets (at least temporarily, for ER), scaling back from 4 → 2 chiplets. I’d expect the 4 → 2 die reduction in ER relative to Sapphire Rapids, combined with improvements to the EMIB interconnect, to flatten the latency hierarchy just as AMD did going from Rome → Milan → Genoa (which I still need to benchmark).


