Saturday, January 1, 2022

Writing New System Software

When I see new projects being announced that are implemented in C or C++, I have to admit that I quickly lose interest. I think C and C++ are poor language choices for system software in 2021. Please don’t write system software in C or C++.

(This is mostly about system software in situations where you have a choice. For instance, if you are programming embedded devices you often have no real choice. But if you are writing a server application for Linux machines, you have a lot more choice.)

For about 15 years, C was my primary language for writing server software. I spent a lifetime writing servers under UNIX in C, and after a few years I got quite good at it. A lot of the software I wrote had to be efficient because it dealt with large amounts of data on tight hardware budgets. It wasn’t speed and efficiency just for the sake of it; it was necessary for making solutions practical, or possible at all.

I got very used to counting bytes and thinking about how you organize and process data so that you can write code in mechanical sympathy with the hardware and the operating system.

Where does speed come from?

The thing is that speed doesn’t come from “fast” languages so much as it comes from being clever about how you use the resources the computer offers. You can write reasonably fast software in almost any decent language (with a few exceptions). But in order to do so, you have to have some idea of how to design and write software in mechanical sympathy with your surroundings.

There are certain things a computer is good at, and then there are things it is not so good at. For instance, variants on the MapReduce theme came from the insight that you could get high throughput if your IO operations were largely sequential: reading and writing large files. Storage systems in the 2000s were based on spinning disks, and both the hardware and the operating system really liked it when they could just race along and read or write large chunks of data. Start doing random access to storage, and things would quickly hit a brick wall.
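
To make the access-pattern point concrete, here is a minimal C sketch of the two patterns - a streaming pass in large chunks versus small reads at random offsets. The file name “data.bin” is just a placeholder and error handling is abbreviated; timing the two loops on a spinning disk versus an SSD shows how differently they behave.

    /* Sequential versus random access: a sketch, not a benchmark harness. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);   /* placeholder file name */
        if (fd < 0) return 1;

        static char buf[1 << 20];              /* 1 MB chunks */
        ssize_t n;

        /* Sequential: one long streaming pass - exactly what disks and the
           OS readahead machinery are good at. */
        while ((n = read(fd, buf, sizeof buf)) > 0)
            ;

        /* Random: seek somewhere else before every small read. Each seek
           defeats readahead and, on spinning disks, costs a head movement. */
        off_t size = lseek(fd, 0, SEEK_END);
        for (int i = 0; i < 10000 && size > 4096; i++) {
            off_t pos = rand() % (size - 4096);
            lseek(fd, pos, SEEK_SET);
            if (read(fd, buf, 4096) < 0) break;
        }

        close(fd);
        return 0;
    }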

Of course, since then, IO performance and data processing have become more complicated to reason about, with SSDs and obscene amounts of RAM. So the workloads that had one obvious fast path in the early 2000s have a richer set of solutions today - which you have to understand if you want to write performant software.

Architectural choices

Architectural choices can also have an impact on speed. For instance, from the mid 2000s, statelessness in servers became very popular, but the understanding of what it gives you still isn’t universal. For the most part, statelessness tries to do away with state management when you either have a horizontally scaled system or suspect you will need horizontal scaling. You trade performance for simplicity, which hopefully results in correctness and easier management.

Wait, what? Stateless systems are slower!?

A lot of “stateless” systems are anything but stateless; they simply depend on a persistence or state management layer to manage state for them. So what happens when you have N servers hammering one database? You have now moved the bottleneck to the database, which happens to be the most expensive component you have in terms of management - and, if you go the commercial route, licenses.

(Take a look at the cost of really large database setups and the practical scaling limits they come with.)

So you start to add more components to do caching: Memcached, Redis and whatnot. Your system complexity grows, along with your vulnerability to any of those components failing. And then you start to realize that caching isn’t as easy as you thought, unless you are prepared to accept less deterministic behavior and to figure out how to re-stabilize the system after a restart. Consistency is still hard, but now it becomes expensive too.

And then there are the latencies. You may think that looking up data over the network is fast, but if you start to measure, you’ll notice that it is actually many orders of magnitude slower than just having the state in RAM. Even if “sub-millisecond” sounds fast, a millisecond is still 1,000 times longer than a microsecond and 1,000,000 times longer than a nanosecond. Memory accesses are on the scale of nanoseconds; memory access plus some computation (which you’ll have to do anyway) is often in the microsecond range. This impacts what design choices you can make - what degrees of freedom you have when designing.
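
To get a feel for the arithmetic, here is a rough C sketch that times pseudo-random in-memory lookups and compares the per-lookup cost with an assumed 500-microsecond network round trip. The 500 µs figure and the table size are illustrative assumptions, not measurements of any particular setup.

    /* Rough comparison of in-memory lookup cost vs. a network round trip. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        enum { N = 1 << 24 };                       /* 16M ints, ~64 MB */
        int *table = malloc(sizeof(int) * N);
        if (!table) return 1;
        for (int i = 0; i < N; i++) table[i] = i;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        unsigned long long sum = 0, idx = 12345;
        for (int i = 0; i < N; i++) {
            idx = (idx * 2654435761u + 1) % N;      /* pseudo-random walk */
            sum += table[idx];
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        double per_lookup = ns / N;
        printf("in-memory lookup: ~%.0f ns each (sum=%llu)\n", per_lookup, sum);
        /* Assumed "fast" network round trip: 500 us = 500,000 ns. */
        printf("a 500 us round trip is ~%.0fx slower\n", 500000.0 / per_lookup);
        free(table);
        return 0;
    }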

Stateless systems are great, but they generally are not an architectural choice that we make for raw performance. It is about reduction in component complexity. (Well, in theory…)

Algorithms

A few years ago I had a conversation with a newly graduated person who was possibly trying to convince me to hire them. We talked a bit about what classes and subjects would be useful later in life and which would have little impact. I pointed out that a solid understanding of algorithms and a bit of discrete math would be very helpful when reasoning about systems.

The person in question stated that none of these subjects were of interest because design and software engineering “is just about cobbling together existing pieces fast to make something work”. I’ll freely admit that I was somewhat appalled at the absolute lack of professional ambition, but I don’t think this is an uncommon view today among young programmers.

Now, it should be noted that I “grew up” in a very academic software development tradition. The workplaces where I spent my first 20 years as a professional programmer tended to have stacks of academic papers on every desk. Not everyone from “my generation” spent their formative years in such an environment. But I have to say that I’ve noticed a shift in culture, away from having a solid basis in knowledge and toward fashions and dogma playing a bigger role in the design choices people make.

You can hear this in how many people argue for a given solution. Rather than state the merits of a given approach, you get a list of authorities who favor the solution or what is considered “best practice”. “Best practice” is usually code for “how we/I like to do things” - not a cold, clinical, mathematical approach to proving quantifiable benefits. It is a form of intellectual laziness.

A good instinct for algorithms is important. This was true when you had to design the nuts and bolts yourself, and it is even more true now that we write software that does a lot more than we had time to do 20-30 years ago.

Fast languages

Languages like C and C++ leave a lot of manual work for the programmer. A programmer who knows what she or he is doing can wring a lot of performance out of a computer. Right up to the theoretical limits. The thing is: even relatively benign tasks become quite difficult when you have to do them yourself without much help from the language or standard library.

For instance, to do memory management you have to learn how to keep track of what you’ve allocated to avoid leaks. This is hard. Just look at how badly Apple struggles with memory leaks now that they have switched CPU architectures and all the latent gremlins in their code have started to manifest.

Apple can’t do it. You probably can’t do it much better.
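
To make the point concrete, here is a minimal, contrived C sketch of how leaks typically sneak in: an early-return error path that forgets to free a buffer. The function is made up purely for illustration.

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Read a line and upper-case it. Looks reasonable, but leaks on the
       error path - the kind of small mistake that piles up in a large
       C codebase unless you are relentlessly disciplined. */
    char *read_line_upper(FILE *f) {
        char *buf = malloc(1024);
        if (!buf) return NULL;
        if (!fgets(buf, 1024, f))
            return NULL;                /* BUG: buf is never freed here */
        for (char *p = buf; *p; p++)
            *p = (char)toupper((unsigned char)*p);
        return buf;                     /* caller must remember to free() this */
    }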

Then you need to understand how memory allocators work if you are to write software in mechanical sympathy with them. Managing memory is surprisingly hard. In C/C++ you’ll sometimes come across applications that have multiple application-specific memory allocators, each handling a subset of the memory management - not least to get around the fact that malloc() and free(), and their equivalents, have unpredictable runtime complexity.
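
As an illustration of what such application-specific allocators often look like, here is a minimal arena (region) allocator sketch - not taken from any particular project. Allocation is a pointer bump with predictable cost, and everything is released at once by resetting the arena.

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct {
        unsigned char *base;
        size_t capacity;
        size_t used;
    } Arena;

    int arena_init(Arena *a, size_t capacity) {
        a->base = malloc(capacity);
        a->capacity = capacity;
        a->used = 0;
        return a->base != NULL;
    }

    void *arena_alloc(Arena *a, size_t size) {
        size_t aligned = (size + 15u) & ~(size_t)15u;  /* keep 16-byte alignment */
        if (aligned < size || a->used + aligned > a->capacity)
            return NULL;                               /* no fallback in this sketch */
        void *p = a->base + a->used;
        a->used += aligned;                            /* a pointer bump, O(1) */
        return p;
    }

    void arena_reset(Arena *a)   { a->used = 0; }      /* frees everything at once */
    void arena_destroy(Arena *a) { free(a->base); }

Parsers and request handlers are typical users of this pattern: allocate freely while handling one request, then reset the arena when the request is done.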

And then there’s concurrency. There are many approaches to concurrency, from multithreading, to asynchronous processing and multiplexing, to “green threads” - which are essentially multiplexing disguised as multithreading. And then there’s the question of how you share state between different parts of the code, ranging from sharing pointers and synchronizing access (to ensure serial access as well as visibility of changes between cores/threads), to futzing about with ownership (which is more about memory management), to transferring state instead of sharing it.
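
To take the first of those state-sharing approaches as an example, here is a minimal pthreads sketch (build with -pthread) of sharing a pointer to state and serializing access with a mutex; the thread and iteration counts are arbitrary.

    #include <pthread.h>
    #include <stdio.h>

    typedef struct {
        pthread_mutex_t lock;
        long count;
    } SharedCounter;

    static void *worker(void *arg) {
        SharedCounter *c = arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&c->lock);
            c->count++;                   /* serialized and visible across threads */
            pthread_mutex_unlock(&c->lock);
        }
        return NULL;
    }

    int main(void) {
        SharedCounter c = { PTHREAD_MUTEX_INITIALIZER, 0 };
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, &c);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("count = %ld\n", c.count); /* 4000000 every time; drop the lock and it isn't */
        return 0;
    }

Transferring state instead - for instance over a queue owned by a single consumer - avoids the lock but changes how you have to structure the code.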

People tend to stick to the technique they know, so it isn’t uncommon to see someone religiously use one technique everywhere. Even when it doesn’t fit.

Yes, you can write things in C and C++ that are very fast, but statistically, it is unlikely that you have the skill and discipline to do so consistently and at the same time deliver quality and robustness. Just because something can be done doesn’t mean you are that good. Most of the time not even the experts are that good.

Standard libraries

C and C++ do not have good standard libraries, and what is available has a very weak tradition: not everyone uses them - not even a simple majority of users. You may like the STL, but that doesn’t change the fact that a huge number of companies have policies against using it, or that many people who use it will freely admit that it is a pain in the neck to use.

There is no standard way in C/C++ to do many of the things modern software has to do, like high-level networking (built-in machinery for request processing, support for common protocols, etc.), dealing with common data formats, or doing cryptography. So dealing with this becomes an Easter egg hunt: which libraries are you going to use, or, if none are convenient, do you end up writing it yourself?
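
To illustrate how little help the language gives you, here is what it takes just to fetch one page over plain HTTP with nothing but POSIX sockets - no TLS, no redirects, no chunked-transfer handling, and with error handling abbreviated. The host “example.com” is simply a placeholder.

    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
        struct addrinfo hints = {0}, *res;
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;
        freeaddrinfo(res);

        const char *req =
            "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
        if (send(fd, req, strlen(req), 0) < 0) return 1;

        char buf[4096];
        ssize_t n;
        while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);   /* raw response, headers and all */

        close(fd);
        return 0;
    }

Everything above the socket layer - parsing the response, handling redirects, speaking TLS - is on you, or on whichever third-party library you happen to pick.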

Since there is no idiomatic way to do a lot of these things, knowledge becomes fragmented. You have a lot of people using a lot of different solutions to address hygiene factors, in contrast to having one, or a very small set of, idiomatic ways of doing things that receive the scrutiny and attention of the whole community - and, not least, that result in a much deeper knowledge base and catalog of good, widely adopted solutions.

Don’t write system software in C/C++

Granted, often you do not have a choice; you can’t always pick whatever language you like. But when you can, you shouldn’t write system software in C or C++. C and C++ aren’t just whatever the most recent versions of the standards say: they are the sum of all widely used versions of the language. In practice you have to deal with the limitations imposed by the past, because practical reality is dictated by legacy code and tooling.

And even if you stick to the latest and greatest, they are not good languages for just getting things done. They leave you with too much responsibility, which burns up time you should be spending on writing robust, understandable and performant code that solves business problems.

I can appreciate the “macho factor” of being able to write fast software in C or C++ (or even Objective-C), but most people aren’t going to be able to do that - at least not consistently, even with 20-30 years of C++ under their belt. I know people with that amount of experience who still struggle to live up to the expectations set by languages that are more conducive to quality.

There are better languages for system programming: languages like Rust, Go, and even Java and its many variants - languages that consistently give you more correctness and performance for the amount of talent you actually have (as opposed to the inflated self-image that all programmers tend to have).

Please don’t use C, C++ or Objective-C for system software where you have a choice. These languages are terrible choices when compared to more suitable languages.


