Tuesday, January 26, 2021

Why Does It Take So Long to Build Software


Why does it take so long to build software? We hear variations of this question frequently: Why is building software so expensive? Why is my team delivering software so slowly? Why am I perpetually behind schedule with my software?

There is a good reason we hear these questions over and over. Businesses need more and more custom software every day in order to stay competitive, and yet it feels like, as time passes, the pace at which we deliver software is stagnating or, worse, slowing down.

I’d like to talk to you all about why this is, but in order to explore it, I first need to introduce a concept that is near and dear to my heart: essential complexity and accidental complexity.

Different types of complexity? That’s complex.

Any time you’re solving a problem, not just software problems, there are two types of complexity:

  1. Essential complexity – This is the complexity that is wrapped up in the problem. You can’t solve the problem without tackling this complexity. This is also sometimes referred to as inherent complexity.
  2. Accidental complexity – This is the complexity that comes along with the approach and tools that you use to solve a problem. This complexity isn’t part of the actual problem you’re solving; it’s the complexity you bring in with your solution. This is sometimes referred to as incidental complexity.

This idea was introduced to us by Fred Brooks’ seminal paper “No Silver Bullet – Essence and Accidents of Software Engineering”. Think of it like this: if you’re trying to solve a math problem, the essential complexity is the understanding of math required in order to actually calculate a solution. If you want to solve the problem, you’ll have to learn the math required (or find someone who knows it). You can’t escape the math if you want to solve the problem.


Here comes the accidental complexity.

Let’s pretend that this is a challenging math problem, and doing it all in your head would be really unproductive. In that case, you’ll want to use a calculator. This is the accidental complexity. Remember the first time you tried to use a graphing calculator for something more than basic math? The accidental complexity is learning how to use that silly TI-83 to enter in all of the complex math to help you solve your problem. You didn’t need to use a calculator, but you knew it would help, and probably wouldn’t be too hard to learn.

But let’s pretend for a minute that you are familiar with Mathematica. Mathematica is an incredibly powerful and complex piece of software, but since you already know it, you decide to solve your problem using it. You’ve already made the investment in learning Mathematica, so it wasn’t a ton of extra effort for you, but you’ve just increased the accidental complexity of your solution by an astronomical amount.

A few weeks later a colleague of yours is in a similar situation, and remembers that you solved a very similar problem. They come to you to see how you solved the problem and you send them the Mathematica project. What do you think will happen at this point? Do you think they will learn Mathematica? Nope. They are going to figure out a different way to solve the problem, or try to make you solve it for them.

As you can see, these two kinds of complexity come from different places, but they are inextricably linked. You can’t solve a problem without some accidental complexity. Even pencil and paper bring along some minuscule amount of accidental complexity.

You can’t solve a problem without some accidental complexity.

How does this apply to software?

This may come as a surprise to you, but the real revolution in software over the last 20 years has been the drastic reduction in the ratio of accidental to essential complexity. DHH used the term “conceptual compression” to describe this force and how it has changed our industry for the better. The proliferation of open source frameworks and libraries has been the most powerful force for reducing the amount of accidental complexity in software systems over the last two decades.

The amount of code required to solve a business problem has been reduced by an order of magnitude compared to 20 years ago, so you would think that creating software would be an order of magnitude faster than it was back then. That doesn’t seem to be happening, though. Why not? What is happening?
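To make that “conceptual compression” concrete, here is a small illustrative sketch of my own (it is not from the original article): the same task, fetching a web page, written once against raw sockets, roughly the level of abstraction much 2000-era code lived at, and once against a modern library that hides all of that plumbing.

```python
# Illustrative sketch only (my example, not the author's): the same task,
# fetching a web page, written at two different levels of abstraction.
import socket
from urllib.request import urlopen

def fetch_homepage_by_hand(host: str) -> bytes:
    """Roughly the 2000-era experience: raw sockets and a hand-built HTTP request."""
    with socket.create_connection((host, 80)) as sock:
        request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return b"".join(chunks)

def fetch_homepage_with_library(host: str) -> bytes:
    """The 'conceptually compressed' version: the plumbing lives inside a library."""
    return urlopen(f"http://{host}/").read()

if __name__ == "__main__":
    print(len(fetch_homepage_by_hand("example.com")))
    print(len(fetch_homepage_with_library("example.com")))
```

The accidental complexity of sockets, headers, and buffering didn’t disappear; it moved into a library so that you no longer have to carry it yourself.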

Software has steadily become easier to create, but while that has been happening, other phenomena have been occurring concurrently:

  1. We are asking more and more of our software.
  2. The volume of software within companies is exploding.
  3. The pace of new technology adoption is increasing.

We are asking more and more of our software.

Even though we are leveraging more and more external tools and libraries to create our software, which should make creating software easier, we are constantly demanding more from our software. This alone has offset a huge amount of the gains. If we were still trying to build 2000-era web applications with modern tools, we actually would be seeing tenfold (or more) increases in the productivity of software construction.


But things don’t stand still, and what both consumers and businesses expect from software has been increasing rapidly. We expect software to do so much more than we did 20 years ago. And as we build these larger and more feature-rich applications, in order to keep them reliable, functional, and understandable, we have had to change the way we build software.

Here are just a few examples of the changes that we’ve seen across the industry over the last two decades:

  1. Source control – Source control has been around this whole time, but it hasn’t always been as universal as it is now. Don’t think this adds accidental complexity? Go ask a junior engineer using Git for the first time what they think.
  2. Automated Testing – We have introduced a lot of testing and testing tools. We do acceptance testing, integration testing, unit testing, etc… This adds a significant amount of accidental complexity to the project, but with the benefit of ensuring that the software delivered is high quality and functions as expected.
  3. Splitting it up – As a system grows in complexity, the number of possible connections and interactions between components grows quadratically (see the sketch after this list). This means that at some point, if software isn’t well designed, these interactions will continue to grow until the software sags under its own complexity. Breaking systems apart, especially if they are distributed over a network, brings along an enormous amount of accidental complexity.
  4. Specialization – As web applications have become more complicated, we have started to introduce a lot of specialization. Whereas in 2000 it wasn’t uncommon at all for a software engineer to design the UI, build the UI, and build the backend of an application, in 2020 this is now a handful of roles. Often a team building a web application will consist of a UI designer, UX designer, frontend software engineer, backend software engineer, and DevOps engineer. In larger orgs you’ll mix in folks with even more specializations around security, architecture, data management, data science, etc… All of these extra roles allow us to build software at a larger scale, but the tools and processes required to orchestrate teams like these introduce a huge amount of accidental complexity.
  5. Infrastructure automation – To build larger and more complex environments to operate a growing number of applications, we have begun to automate their creation and maintenance. This allows us to more easily manage environments at scale, but it pulls in a whole suite of tools and knowledge needed to do this effectively. The amount of complexity brought in by some of these tools can be immense, leading to DevOps becoming a dedicated role on most large teams.
  6. Frequent deployments – Because applications are growing in size and complexity, we need to deliver in smaller increments to reduce risk. In order to accomplish this we have introduced the concepts of continuous integration and continuous deployment. Again, this is wonderful for delivering software at scale, but it brings accidental complexity from the myriad of tools and skills needed to build and operate these pipelines.
  7. Multiple devices and form factors – We used to be able to say that our software was being used on a handful of known resolutions inside of a single operating system. Now our applications need to run on desktops, laptops, and mobile devices across a huge number of platforms. Often we will have native mobile applications as well as web applications. Maybe throw in some IoT applications and watch applications while you’re at it. This gives us an enormous amount of flexibility in where and how we access our data, and it is a change that has transformed our society, but it has undoubtedly added complexity to the software construction process.
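To put a rough number on the quadratic growth mentioned in item 3 above, here is a tiny sketch (my own illustration, not the author’s) that counts the potential pairwise interactions between components:

```python
# A minimal sketch of the point in item 3: if every component can potentially
# interact with every other component, the number of pairwise connections
# grows quadratically, n * (n - 1) / 2 for n components.
def potential_interactions(n_components: int) -> int:
    return n_components * (n_components - 1) // 2

for n in (5, 10, 25, 50, 100):
    print(f"{n:>3} components -> {potential_interactions(n):>5} potential interactions")
# 5 -> 10, 10 -> 45, 25 -> 300, 50 -> 1225, 100 -> 4950
```

Going from 10 components to 100 takes you from 45 potential interactions to 4,950, which is why breaking systems apart, along with all the discipline and tooling that comes with it, eventually becomes unavoidable.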

The volume of software within companies is exploding.

Even before reading the section above, you probably had a pretty good idea of how demanding more from our software and building for multiple form factors can lead to increasing complexity. But on an individual application basis, how does having more software within an enterprise increase the complexity of building out a single application?

The answer is straightforward. It doesn’t, except when you want that software to interact with other software. The more software that exists within a company, the more overlap between systems there is, which means that different systems need access to the same data in order to function. This means even more systems to store the shared data, and integrations between all of them.

As an example, let’s say you’re an office chair manufacturer in 2000 and you don’t have a web presence yet. You need to build an inventory system for your company, and so you work to build out software to do just that. That inventory system is used by the folks in the warehouse, and you can run nightly reports to get inventory levels, which can then be sent to folks throughout your company. The system is relatively standalone, and everyone is okay with nightly reports. Things just don’t move very quickly.

Fast forward to 2020 and your inventory system is far from standalone. You have partners that can push orders directly into your systems, and a web storefront that gets real-time inventory updates and adjusts inventory as orders are placed. Your inventory system is integrated directly with your shipper so that you can automatically generate shipping labels and schedule pickups. You sell your products directly on Amazon, so your inventory system is integrated directly with the third party software that manages that process. The folks in your warehouse are using mobile devices to locate, scan, check in, and pick inventory, so you probably have a mobile solution to manage all of that.
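As a purely hypothetical sketch (every name and function below is invented for illustration, not taken from any real system), here is roughly what one of those integrations can look like: a partner pushes an order into your system, and that single event has to ripple out to your own records, your storefront, and your shipper.

```python
# Hypothetical sketch (all names invented): a single incoming partner order
# fans out to several other systems, and each integration is a new source of
# accidental complexity.
inventory = {"CHAIR-ERGO-01": 42, "CHAIR-TASK-07": 8}  # stand-in for a real database

def notify_storefront(sku: str, quantity_left: int) -> None:
    # Stand-in for a real API call to the web storefront.
    print(f"storefront: {sku} now shows {quantity_left} in stock")

def schedule_shipment(order_id: str, sku: str, quantity: int) -> None:
    # Stand-in for a real API call to the shipping provider.
    print(f"shipper: label and pickup scheduled for order {order_id}")

def handle_partner_order(payload: dict) -> None:
    """Webhook-style handler for an order pushed in by a partner system."""
    sku, quantity = payload["sku"], payload["quantity"]
    inventory[sku] -= quantity                              # update our own records
    notify_storefront(sku, inventory[sku])                  # keep the storefront in sync
    schedule_shipment(payload["order_id"], sku, quantity)   # hand off to the shipper

handle_partner_order({"order_id": "PO-1001", "sku": "CHAIR-ERGO-01", "quantity": 5})
```

Each of those stubs stands in for a real integration, with its own authentication, error handling, and failure modes, and that is where the accidental complexity piles up.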

As systems proliferate, and take over all aspects of business operations, they start to overlap more and more until nothing can fulfill its needs without integrating with a dozen other systems. While this has provided an unprecedented amount of productivity and automation, it has introduced a significant amount of, you guessed it, accidental complexity around all of the data movement and integrations.

Marc Andreessen famously coined the phrase “software is eating the world”, and this process is accelerating with no end in sight.

The pace of new technology adoption is increasing.

Back in 2000 you generally bought your platform from a single vendor such as Microsoft, Sun, or Borland. You might buy a few components, but your entire ecosystem came from that one vendor. You were limited in what you could accomplish by what your vendor supported, but the number of external tools and technologies you were adopting and integrating was relatively small.

In order to keep up with the rapidly changing technology landscape, companies started to adopt more open technologies that evolve at a rapid clip. This came with huge advantages, allowing you to accomplish feats with these tools that you could only have dreamed of previously. But switching tools frequently comes with a cost: you end up introducing a lot of accidental complexity into the process.

While using a bleeding-edge tool might give you an advantage in some areas, the newer it is, the more you’re going to feel the pain of supporting it. Also, the earlier you adopt a technology, the more pain you’ll experience as it grows and matures into a tool that is useful to a wide swath of users. Balancing the gain of leveraging a new technology with the pain that comes along with its use is something that technologists have been struggling with for a very long time.

We now find ourselves in a world where being able to sift through the avalanche of tools, frameworks, and techniques to pick out the ones that are useful (and might be around for longer than 6 months) is an incredibly valuable skill. But if you’re not careful, grabbing unproven new tools or frameworks can have a detrimental effect. They can lead to a ton of accidental complexity, or even worse, a dead end if that framework dies off before crossing the chasm.

Is there hope?

There are certainly more reasons we could discuss regarding why building software takes so long. Things such as business needs changing more rapidly, enterprise architecture standards, or an increased emphasis on security. But the point is that what we are building in 2020 barely resembles the software we were building back in 2010, much less in 2000, and that is for the most part a good thing.

However, there are some downsides. It feels like we have returned to where we were in the 2000 to 2007 timeframe, when every application was being constructed using the same tools, and many of those tools are getting progressively more complicated. Many of the tools and frameworks that are now popular come out of large organizations and are designed to solve problems that most businesses don’t have.

Because of this, many small and medium businesses, and even departments within large organizations, are finding that their ability to execute on software is diminishing rapidly, and they can’t figure out how to turn it around. They have started to turn to low-code and no-code walled gardens in order to increase the pace of development, but in many cases they are crippling the functionality and lifespans of the systems they are building with these tools, and driving up their ongoing maintenance costs.

In a future post, I am going to discuss the impact of accidental complexity on software projects, and how we can more effectively avoid it while ensuring we are still meeting the needs of the business.



from Hacker News https://ift.tt/3lZh0Fb
