I was inspired by the release of Apple’s M1 ARM processor to tweet about the perils of lock-free programming which led to some robust discussion. The discussion went pretty well given the inanity of trying to discuss something as complicated as CPU memory models in the constraints of tweets, but it still left me wanting to expand slightly on the topic in blog form.
This is intended to be a casual introduction to the perils of lock-free programming (which I last wrote about some fifteen years ago), but also some explanation of why ARM’s weak memory model breaks some code, and why that code was probably broken already. I also want to explain why C++11 made the lock-free situation strictly better (objections to the contrary notwithstanding).
Mandatory disclaimer: if your code always uses locks when sharing data between threads then you don’t need to worry about any of this – properly implemented locks avoid all of these data races (unlike the horrible lock in this video).
The basic problems of lock-free programming are best explained through the example of a lock-free producer/consumer pattern, with the producer thread looking like this (C++ pseudo-code with data/function boundaries omitted):
// Producer thread:
Data_t g_data1, g_data2, g_data3;
bool g_flag;

g_data1 = calc1();
g_data2 = calc2();
g_data3 = calc3();
g_flag = true; // Indicate that the data is ready for consumption
And here is the consumer thread which retrieves and uses the data:
// Consumer thread:
if (g_flag) {
    DoSomething(g_data1, g_data1, g_data2, g_data3);
}
This omits a ton of details (when does g_flag get cleared? How do the threads avoid spinning?) but it suffices for my purposes. The question is, what is wrong with this code, in particular the producer thread?
The basic problem is that the code relies on the three data variables being written to before the flag, but it does not enforce that. If the compiler rearranges the writes then g_flag may be set to true before all of the data is written and the consumer thread may see incorrect values.
Optimizing compilers can be very aggressive about rearranging operations – it’s one of the ways they make their generated code run fast. They may do this in order to reduce the use of registers, to improve the use of CPU pipelines, or just because of some random heuristic added to make Office XP load slightly faster. It’s not worth thinking too much about why a compiler might rearrange things, it’s just important to realize that they can and do.
Compilers are “allowed” to rearrange the writes because of the “as-if” rule, which says that they have done their job as long as the program that they generate behaves “as-if” they hadn’t optimized it. Since the C/C++ abstract machine has long assumed a single thread of execution – with no external observers – all this rearrangement of writes has been correct and reasonable, and has been done for decades.
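To make this concrete, here is one hypothetical transformation that the as-if rule permits – single-threaded behavior is unchanged, but the flag is now raised before the data is ready:

// One legal (and, for us, disastrous) compiler rearrangement:
g_flag = true; // Hoisted above the data writes – a single thread can’t tell.
g_data1 = calc1();
g_data2 = calc2();
g_data3 = calc3();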
The question then, is what must be done to stop the compiler from breaking our beautiful code? Let’s pretend for a moment that we are a circa 2005 programmer trying to make this work. Here are some bad ideas:
- Declare g_flag as volatile. That prevents the compiler from omitting the reads/writes of g_flag but, to the surprise of many, it does not prevent the problematic rearrangement. Compilers are not allowed to reorder volatile reads/writes with respect to each other, but they are allowed to rearrange them relative to “normal” reads/writes. Adding volatile does nothing to solve our reordering problem (/volatile:ms on VC++ does, but it is a non-standard extension to the language that may generate slower code).
- If declaring g_flag as volatile is insufficient then let’s try declaring all four variables as volatile! Then the compiler can’t rearrange the writes and our code will work… on some computers (see the sketch below).
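For reference, the all-volatile version of the producer would look something like this sketch:

// Producer thread, all-volatile version:
volatile Data_t g_data1, g_data2, g_data3;
volatile bool g_flag;

g_data1 = calc1();
g_data2 = calc2();
g_data3 = calc3();
g_flag = true; // The compiler can’t reorder these volatile writes…
               // …but the CPU still can, as we’re about to see.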
It turns out that compilers aren’t the only things that like to rearrange reads and writes. CPUs like to do this as well. This is separate from out-of-order execution (always invisible to your code), and in fact there are in-order CPUs that reorder reads/writes (Xbox 360 CPU) and there are out-of-order CPUs that mostly do not reorder reads/writes (x86/x64 CPUs).
So, if you declare all four variables as volatile then you have code that will only run correctly on x86/x64. And, this code is potentially inefficient because no reads/writes to those variables can be optimized away, potentially leading to redundant work (as when g_data1 is passed twice to DoSomething).
If you are satisfied with inefficient, non-portable code then feel free to stop here, but I think we can do better. Let’s continue to constrain ourselves to the options available in 2005. We now have to make use of… memory barriers.
On x86/x64 we need a compiler memory barrier to prevent reordering. This does the trick:
g_data1 = calc1();
g_data2 = calc2();
g_data3 = calc3();
_ReadWriteBarrier(); // VC++ only and deprecated, but okay in 2005
g_flag = true; // Indicate that the data is ready for consumption.
This tells the compiler not to rearrange the writes across that barrier, which is exactly what we need. Another barrier may be needed after the write to g_flag to ensure that the value gets written, but the details are too uncertain for me to want to discuss. A similar barrier is needed in the consumer thread – a rough sketch follows – but otherwise I’m ignoring that thread for now to keep things simple.
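For reference, the consumer side with the same compiler barrier might look something like this:

// Consumer thread (sketch):
if (g_flag) {
    _ReadWriteBarrier(); // Stop the compiler from hoisting the data reads above the flag read.
    DoSomething(g_data1, g_data1, g_data2, g_data3);
}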
The problem is that this code is still broken on CPUs with a weak memory model. A “weak” memory model indicates CPUs that can reorder reads and writes (for greater efficiency or simplicity of implementation) and this includes ARM, PowerPC, MIPS, and basically every in-use CPU except for x86/x64. The solution to this is also a memory barrier, but this time it needs to be a CPU instruction which tells the CPU not to reorder. Something like this:
g_data1 = calc1();
g_data2 = calc2();
g_data3 = calc3();
MemoryBarrier(); // Windows only, and an expensive full memory barrier.
g_flag = true; // Indicate that the data is ready for consumption.
The actual implementation of MemoryBarrier depends on the CPU. In fact, as the comment suggests, MemoryBarrier is not really the ideal choice here because we just want a write/write barrier instead of a much more expensive full memory barrier (which makes reads wait for writes to fully complete) but this is good enough for our purposes today.
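As an illustration – this is not what the Windows macro expands to – a dedicated write/write barrier on ARM might be expressed like this with a GCC-style compiler:

// Sketch of a store/store barrier for ARM (GCC/Clang inline assembly).
// “dmb ishst” orders stores against stores only, which is cheaper than
// the full “dmb ish”; the “memory” clobber doubles as a compiler barrier.
inline void WriteWriteBarrier() {
    asm volatile("dmb ishst" ::: "memory");
}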
I assume that the MemoryBarrier intrinsic is also a compiler memory barrier, so we only need one or the other. Our awesome/efficient producer thread now becomes:
#ifdef X86_OR_X64
#define GenericBarrier _ReadWriteBarrier
#else
#define GenericBarrier MemoryBarrier
#endif
g_data1 = calc1();
g_data2 = calc2();
g_data3 = calc3();
GenericBarrier(); // Why did I have to define this myself?
g_flag = true; // Indicate that the data is ready for consumption.
If you have circa-2005 code without these memory barriers then your code is broken, and has always been broken, on all CPUs, because compilers have always been allowed to rearrange writes. With these memory barriers (implemented as needed for different compilers and platforms) your code is beautiful and portable.
It turns out that ARM’s weak memory model really doesn’t make things any more complicated. If you are writing lock-free code and not using any sort of memory barriers then your code is potentially broken everywhere due to compiler reordering. If you are using memory barriers then it should be easy to extend them to include hardware memory barriers.
The code above is error prone (where do the barriers go?), verbose, and inefficient. Luckily, when C++11 came along we got better options. Prior to C++11 the language didn’t really have a memory model; there was just the implicit assumption that all code was single threaded, and if you touched shared data outside of locks then god have mercy on your soul. C++11 added a memory model that acknowledged the existence of threads. This made it more explicit that the no-barrier code above was broken, but also gave new options to fix it, like this:
// Producer thread:
Data_t g_data1, g_data2, g_data3;
std::atomic<bool> g_flag; // Look at this!

g_data1 = calc1();
g_data2 = calc2();
g_data3 = calc3();
g_flag = true; // Indicate that the data is ready for consumption.
The change is subtle and easy to miss. All I did was change the type of g_flag from bool to std::atomic<bool>. This tells the compiler not to elide reads and writes of this variable, not to rearrange reads and writes across reads and writes to this variable, and to add appropriate CPU memory barriers as needed.
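For reference, a plain assignment to a std::atomic is just shorthand for a store with the strongest ordering:

g_flag = true; // Equivalent to the line below.
g_flag.store(true, std::memory_order_seq_cst);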
We can even optimize this code slightly:
// Producer thread:
Data_t g_data1, g_data2, g_data3;
std::atomic<bool> g_flag;

g_data1 = calc1();
g_data2 = calc2();
g_data3 = calc3();
g_flag.store(true, std::memory_order_release);
By using memory_order_release we are telling the compiler exactly what we are doing so that it can use the appropriate (cheaper) type of memory barrier instruction, or no memory barrier instruction in the case of x86/x64. Our code is now relatively clean and perfectly efficient.
At this point writing the consumer thread is easy. In fact, with the new declaration of g_flag the original version of the consumer thread is now correct! But, we can optimize it slightly:
// Consumer thread:
if (g_flag.load(std::memory_order_acquire)) {
    DoSomething(g_data1, g_data1, g_data2, g_data3);
}
The std::memory_order_acquire flag tells the compiler that we don’t need a full memory barrier – a read-acquire just ensures that the reads of the data values cannot be moved ahead of the read of g_flag, without blocking other kinds of reordering.
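Pulling the pieces together, here is a minimal self-contained sketch of the whole pattern – the types and calculations are stand-ins, and the consumer uses a deliberately crude spin-wait:

#include <atomic>
#include <cstdio>
#include <thread>

using Data_t = int; // Stand-in type for this sketch.
Data_t calc1() { return 1; }
Data_t calc2() { return 2; }
Data_t calc3() { return 3; }
void DoSomething(Data_t a, Data_t b, Data_t c, Data_t d) {
    std::printf("%d %d %d %d\n", a, b, c, d);
}

Data_t g_data1, g_data2, g_data3;
std::atomic<bool> g_flag{false};

void Producer() {
    g_data1 = calc1();
    g_data2 = calc2();
    g_data3 = calc3();
    g_flag.store(true, std::memory_order_release);
}

void Consumer() {
    // Crude busy-wait – see the caveat below.
    while (!g_flag.load(std::memory_order_acquire))
        std::this_thread::yield();
    DoSomething(g_data1, g_data1, g_data2, g_data3);
}

int main() {
    std::thread consumer(Consumer);
    std::thread producer(Producer);
    producer.join();
    consumer.join();
}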
Finishing the code so that the threads can avoid busy-waits and other problems is left as an exercise for the reader.
If you want to learn these techniques then start by carefully reading Jeff Preshing’s introduction to lock-free programming or his “This is Why They Call It a Weakly-Ordered CPU”, and then consider joining a monastery or nunnery instead. Lock-free programming is the most dangerous hammer in the C++ toolkit – and that is saying a lot – and it is rarely appropriate.
Note: writing an x86 emulator on ARM forces you to deal with this in a particularly messy way because you never know when reordering will be a problem so you typically have to insert a lot of memory barriers. Or you can follow Apple’s strategy and add a CPU mode that enforces x86/x64 memory ordering, turning that on when emulating.
References:
Most discussion is on Reddit, but there is also some discussion on Twitter and on Hacker News.