Friday, December 17, 2021

Slitter: A slab allocator that trusts, but verifies

Slitter is Backtrace’s deliberately middle-of-the-road thread-caching slab allocator, with explicit allocation class tags (rather than classes derived from each object’s size). It’s mostly written in Rust, and we use it in our C backend server.

Slitter’s design is about as standard as it gets: we hope to dedicate the project’s complexity budget to always-on “observability” and safety features. We don’t wish to detect all or even most memory management errors, but we should statistically catch a small fraction (enough to help pinpoint production issues) of such bugs, and always constrain their scope to the mismanaged allocation class.

We decided to code up Slitter last April, when we noticed that we would immediately benefit from backing allocation with temporary file mappings: the bulk of our data is mapped from persistent data files, but we also regenerate some cold metadata during startup, and accesses to that metadata have amazing locality, both temporal and spatial (assuming bump allocation). We don’t want the OS to swap out the whole heap (that way lie grey failures), so we opt specific allocation classes into file-backed, swappable mappings.
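
Mechanically, opting a class into swappable memory only takes a shared mapping of a throwaway file. Here’s a minimal sketch of that mechanism, assuming Linux; Slitter’s real mapping code is more involved, and error handling is elided:

    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Map `size` bytes backed by an unlinked temporary file: the kernel
     * can evict cold pages to that file instead of keeping them in RAM
     * or pushing them to swap.
     */
    static void *map_swappable(size_t size)
    {
        char path[] = "/tmp/slitter-XXXXXX";
        int fd = mkstemp(path);

        if (fd < 0)
            return NULL;
        unlink(path);  /* storage is reclaimed once the mapping goes away */
        if (ftruncate(fd, (off_t)size) != 0) {
            close(fd);
            return NULL;
        }

        /* MAP_SHARED (rather than MAP_ANONYMOUS) is what makes eviction
         * cheap: cold pages are paged out to the file, not to swap.
         */
        void *ret = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        close(fd);  /* the mapping keeps the file's data alive */
        return (ret == MAP_FAILED) ? NULL : ret;
    }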

By itself, this isn’t a reason to write a slab allocator: we could easily have configured specialised arenas in jemalloc, for example. However, we also had our eye on longer-term improvements to observability, and to the debugging or mitigation of memory management errors in production; those could only be unlocked by migrating to an interface with explicit tags for each allocation class (type).

Classic mallocs like jemalloc and tcmalloc are fundamentally unable to match that level of integration: we can’t tell malloc(3) what we’re trying to allocate (e.g., a struct request in the HTTP module), only its size. It’s still possible to wrap malloc in a richer interface, and, e.g., track heap consumption by tag. Unfortunately, the result is slower than a native solution, and, without help from the underlying allocator, it’s easy to incorrectly match tags between malloc and free calls. In my experience, this frequently leads to useless allocation statistics, usually around the very faulty code paths one is attempting to debug.
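
For contrast, here’s roughly what an interface with explicit allocation class tags looks like. The names below are invented for illustration, and are not Slitter’s actual C API:

    #include <stddef.h>

    struct allocation_class;  /* opaque tag, one per object type */

    /* Hypothetical tagged slab interface, for illustration only. */
    extern struct allocation_class *class_register(const char *name, size_t size);
    extern void *class_allocate(struct allocation_class *tag);
    extern void class_release(struct allocation_class *tag, void *ptr);

    struct request { int fd; /* ... */ };

    static struct allocation_class *request_class;  /* registered at startup */

    void http_init(void)
    {
        request_class = class_register("http/request", sizeof(struct request));
    }

    struct request *request_create(void)
    {
        return class_allocate(request_class);
    }

    void request_destroy(struct request *req)
    {
        /* The tag travels with every call, so the allocator can check
         * it against the one used at allocation time, and can track
         * statistics per class rather than per size.
         */
        class_release(request_class, req);
    }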

Even once we have built detailed statistics on top of a regular malloc, it’s hard to convince the underlying allocator to only recycle allocations within an object class: not only do mallocs eagerly recycle allocations of similar sizes regardless of their type, but they will also release unused runs of address space, or repurpose them for totally different size classes. That’s what mallocs are supposed to do… it just happens to also make debugging a lot harder when things inevitably go wrong.

Slab allocators work with semantically richer allocation tags: an allocation tag describes its objects’ size, but can also specify how to initialise, recycle, or deinitialise them. The problem is that slab allocators tend to focus exclusively on speed.

Forks of libumem may be the exception, thanks to the Solaris culture of pervasive hooking. However, umem’s design reflects the sensibilities of the 00s, when it was written: threads share a few caches, and the allocator tries to reuse address space. In contrast, Slitter assumes memory is plentiful enough for thread-local caches and type-stable allocations.

Our experience so far

We have been running Slitter in production for over two months, and rely on it to:

  • detect when an allocation is freed with the wrong allocation class tag (i.e., detect type confusion on free).
  • avoid any in-band metadata: there are guard pages between allocations and allocator metadata, and no intrusive freelist for use-after-frees to stomp over.
  • guarantee type stable allocations: once an address has been used to fulfill a request for a certain allocation class, it will only be used for that class. Slitter doesn’t overlay intrusive lists on top of freed allocations, so the data always reflects what the application last stored there. This means that double-frees and use-after-frees only affect the faulty allocation class. An application could even rely on read-after-free being benign to simplify non-blocking algorithms.
  • let each allocation class specify how its backing memory should be mapped in (e.g., plain 4 KB pages or file-backed swappable pages).

Thanks to extensive contracts and a mix of hardcoded and random tests, we encountered only two issues during the initial rollout, both in the small amount of lock-free C code that is hard to test.

Type stability exerts a heavy influence all over Slitter’s design, and has obvious downsides. For example, a short-lived application that progresses through a pipeline of stages, where each stage allocates different types, would definitely waste memory if it were to replace a regular malloc with a type-stable allocator like Slitter. We believe the isolation benefits are more than worth the trouble, at least for long-lived servers that quickly enter a steady state.

In the future, we hope to also:

  • detect when an interior pointer is freed.
  • detect simple buffer overflows that cross allocation classes, by inserting guard pages.
  • always detect frees of addresses Slitter does not manage.
  • detect most back-to-back double-frees.
  • detect a random fraction of buffer overflows, with a sampling eFence.

In addition to these safety features, we plan to rely on the allocator to improve observability into the calling program, and wish to:

  • track the number of objects allocated and recycled in each allocation class.
  • sample the call stack when the heap grows.
  • track allocation and release call stacks for a small fraction of objects.

Here’s how it currently works, and why we wrote it in Rust, with a dash of C.

The high level design of Slitter

At a high level, Slitter (the allocation path is sketched in code after this list)

  1. reserves shared 1 GB Chunks of memory via the Mapper trait
  2. carves out smaller type-specific Spans from each chunk with Mill objects
  3. bump-allocates objects from Spans with Press objects, into allocation Magazines
  4. pushes and pops objects into/from thread-local magazines
  5. caches populated magazines in global type-specific lock-free stacks
  6. manages empty magazines with a global mostly lock-free Rack
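
In code, the allocation path roughly follows the steps above. This is a sketch of the structure only (names invented), with everything below the thread-local magazines hidden behind two slow-path calls:

    #include <stdbool.h>
    #include <stddef.h>

    struct magazine { void **items; size_t count; };
    struct class_info { size_t id; /* object size, policies, ... */ };
    struct thread_cache { struct magazine *mags; /* one per class */ };

    /* Steps 5-6: pop a cached full magazine, or get one via the Rack. */
    extern bool magazine_refill(struct class_info *, struct magazine *);
    /* Steps 1-3: bump-allocate from a Span, milling a fresh one if needed. */
    extern void *press_allocate(struct class_info *);

    void *allocate(struct thread_cache *cache, struct class_info *info)
    {
        struct magazine *mag = &cache->mags[info->id];

        if (mag->count > 0)                /* step 4: thread-local fast path */
            return mag->items[--mag->count];
        if (magazine_refill(info, mag))    /* step 5: grab a full magazine */
            return mag->items[--mag->count];
        return press_allocate(info);       /* steps 1-3: fresh allocation */
    }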

Many general purpose memory allocators implement strategies similarly inspired by Bonwick’s slab allocator, and time-tested mallocs may well provide better performance and lower fragmentation than Slitter. The primary motivation for designing Slitter is that having explicit allocation classes in the API makes it easier for the allocator to improve the debuggability and resilience of the calling program. For example, most allocators can tell you the size of your program’s heap, but that data is much more useful when broken down by struct type or program module.

Most allocators try to minimise accesses to the metadata associated with allocations. In fact, that’s often seen as a strength of the slab interface: the allocator can just rely on the caller to pass the correct allocation class tag, instead of hitting metadata to figure out where the freed address should go.

We went in the opposite direction with Slitter. We still rely on the allocation class tag for speed, but also actively look for mismatches before returning from deallocation calls. Nothing depends on values computed by the mismatch detection logic, and the resulting branch is trivially predictable (the tag always matches), so we can hope that wide out-of-order CPUs will hide most of the checking overhead, as long as the checks are simple enough.

This concern (accessing metadata in a few instructions), combined with our goal of avoiding in-band metadata, led to a simple layout for each chunk’s data and metadata.

.-------.------.-------|---------------.-------.
| guard | meta | guard | data ... data | guard |
'-------'------'-------|---------------'-------'
  2 MB    2 MB   2 MB  |      1 GB        2 MB
                       v
               Aligned to 1 GB

A chunk’s data is always a 1 GB address range, aligned to 1 GB: the underlying mapper doesn’t have to immediately back that with memory, but it certainly can, e.g., in order to use gigantic pages. The chunk is preceded and followed by 2 MB guard pages. The metadata for the chunk’s data lives in a 2 MB range, just before the preceding guard page (i.e., 4 MB to 2 MB before the beginning of the aligned 1 GB range). Finally, the 2 MB metadata range is itself preceded by a 2 MB guard page.

Each chunk is statically divided into 65536 spans of 16 KB each. We can thus map a span to its slot in the metadata block with a shift, a mask, and some address arithmetic. Mills don’t have to hand out individual 16 KB spans one at a time; they simply have to work in multiples of 16 KB, and never split a span in two.
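
Here’s what that address arithmetic looks like, a sketch assuming the layout above (the field names are invented, and Slitter’s actual metadata records hold more than a class id):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define CHUNK_SIZE      (1UL << 30)  /* 1 GB of data, 1 GB-aligned */
    #define SPAN_SIZE       (1UL << 14)  /* 16 KB spans, 65536 per chunk */
    #define METADATA_OFFSET (4UL << 20)  /* metadata begins 4 MB before the data */

    struct span_meta {
        uint32_t class_id;  /* the allocation class that owns this span */
        /* ... up to 32 bytes per slot: 2 MB of metadata / 65536 spans */
    };

    static struct span_meta *span_metadata(const void *ptr)
    {
        uintptr_t address = (uintptr_t)ptr;
        uintptr_t chunk = address & ~(CHUNK_SIZE - 1);      /* mask to 1 GB */
        size_t span_index = (address - chunk) / SPAN_SIZE;  /* shift by 14 */
        struct span_meta *slots = (struct span_meta *)(chunk - METADATA_OFFSET);

        return &slots[span_index];
    }

    /* On release: confirm the caller's tag before recycling the object. */
    static void check_class(const void *ptr, uint32_t expected)
    {
        assert(span_metadata(ptr)->class_id == expected &&
               "freed with the wrong allocation class tag");
    }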

Why we wrote Slitter in Rust and C

We call Slitter from C, but wrote it in Rust, despite the more painful build process: that pain isn’t going anywhere, since we expect our backend to be in a mix of C, C++, and Rust for a long time. We also sprinkled in some C when the alternative would have been to pull in a crate just to make a couple syscalls, or to enable unstable Rust features: we’re not “rewrite-it-in-Rust” absolutists, and merely wish to use Rust for its strengths (control over data layout, support for domain-specific invariants, large ecosystem for less performance-sensitive logic, ability to lie to the compiler where necessary, …), while avoiding its weaknesses (interacting with Linux interfaces defined by C headers, or fine-tuning code generation).

The majority of allocations only interact with the thread-local magazines. That’s why we wrote that code in C: stable Rust doesn’t (yet) let us access likely/unlikely annotations, nor fast “initial-exec” thread-local storage. Of course, allocation and deallocation are the main entry points into a memory allocation library, so this creates a bit of friction with Rust’s linking process.
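
Concretely, the C entry point can spell out both hints. A rough sketch, assuming GCC or Clang (the real fast path also initialises the thread cache and performs the checks described above):

    #include <stddef.h>

    struct magazine { void **items; size_t count; };

    /* "initial-exec" TLS skips a level of indirection compared to the
     * default model for dynamically loaded code.  Initialisation of
     * this per-thread cache is elided here.
     */
    static __thread struct magazine *magazines
        __attribute__((tls_model("initial-exec")));

    extern void *allocate_slow(size_t class_id);  /* refill, then retry */

    void *allocate_fast(size_t class_id)
    {
        struct magazine *mag = &magazines[class_id];

        /* likely(): the magazine is almost always non-empty. */
        if (__builtin_expect(mag->count > 0, 1))
            return mag->items[--mag->count];
        return allocate_slow(class_id);
    }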

We also had to implement our lock-free multi-popper Treiber stack in C: x86-64 doesn’t have anything like LL/SC, so we instead pair the top-of-stack pointer with a generation counter… and Rust hasn’t stabilised 128-bit atomics yet.
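
The result is a double-width compare-and-swap loop. A simplified C11 sketch (Slitter’s actual stack does more, e.g., multi-pop):

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    struct node { struct node *next; };

    /* Pointer + generation fit in 16 bytes: on x86-64, the CAS below
     * becomes cmpxchg16b (compile with -mcx16).
     */
    struct stack_head {
        struct node *top;
        uintptr_t generation;  /* bumped on each pop to defeat ABA */
    };

    struct stack { _Atomic struct stack_head head; };

    static struct node *stack_pop(struct stack *stack)
    {
        struct stack_head old = atomic_load(&stack->head);

        for (;;) {
            if (old.top == NULL)
                return NULL;

            /* Reading old.top->next can race with a concurrent pop and
             * reuse; type stability makes that read benign, and a stale
             * value fails the generation check below.
             */
            struct stack_head new = {
                .top = old.top->next,
                .generation = old.generation + 1,
            };

            if (atomic_compare_exchange_weak(&stack->head, &old, new))
                return old.top;
        }
    }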

We chose to use atomics in C instead of a simple lock in Rust because the lock-free stack and the atomic bump pointer (which Rust handles fine) are important for our use case: when we rehydrate cold metadata at startup, we do so from multiple I/O-bound threads, and we have observed hiccups due to lock contention in malloc. At some point, lock acquisitions are rare enough that contention isn’t an issue; that’s why we’re comfortable with locks when refilling bump allocation regions.
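
The bump pointer itself is a compare-and-swap loop over a cursor; a minimal sketch (field names invented for illustration):

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    struct press {
        _Atomic uintptr_t cursor;  /* next free byte in the current span */
        uintptr_t limit;           /* one past the last usable byte */
    };

    static void *press_try_allocate(struct press *press, size_t size)
    {
        uintptr_t cur = atomic_load_explicit(&press->cursor, memory_order_relaxed);

        while (cur + size <= press->limit) {
            /* Claim [cur, cur + size); on failure, cur is reloaded. */
            if (atomic_compare_exchange_weak_explicit(
                    &press->cursor, &cur, cur + size,
                    memory_order_relaxed, memory_order_relaxed))
                return (void *)cur;
        }

        return NULL;  /* span exhausted: take the (rarely contended) lock */
    }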

Come waste performance on safety!

A recurring theme in the design of Slitter is that we find ways to make the core (de)allocation logic slightly faster, and immediately spend that efficiency on safety, debuggability, or, eventually, observability. For a lot of code, performance is a constraint to satisfy, not a goal to maximise; once we’re close to good enough, it makes sense to trade performance away. I also believe that there is lower-hanging fruit in memory placement than shaving a few nanoseconds off the allocation path.

Slitter also focuses on instrumentation and debugging features that are always active, even in production, instead of leaving that to development tools, or to logic that must be explicitly enabled. In a SaaS world, development and debugging are never done. Opt-in tools are definitely useful, but always-on features are much more likely to help developers catch the rarely occurring bugs on which they tend to spend an inordinate amount of investigation effort (and if a debugging feature can be safely enabled in production at a large scale, why not leave it enabled forever?).

If that sounds like an interesting philosophy for a slab allocator, come hack on Slitter! Admittedly, the value of Slitter isn’t as clear for pure Rust hackers as it is for those of us who blend C and Rust, but per-class allocation statistics and placement decisions should be useful, even in safe Rust, especially for larger programs with long runtimes.

Our MIT-licensed code is on GitHub: there are plenty of small improvements to work on, and, while we still have to re-review the documentation, the project has decent test coverage, and we try to write straightforward code.

This post was much improved by feedback from my beta readers, Barkley, David, Eloise, Mark, Per, Phil, Ruchir, and Samy.
