Saturday, May 30, 2020

Powers of Two (2017)

There are a few "best practices" I previously thought were absolutely essential that I've been able to do without. I suspect that's a function of a few different factors, but I'm curious about one in particular.

For context, let me explain what we've been doing. It's not revolutionary, or even particularly interesting. If you squint it looks like XP.

We sit next to our users. It gets loud sometimes, but it's the best way to stay in touch and understand what's going on.

We pair for about 6 hours a day, every day. Everything that's on the critical path is worked on in a pair. Always. Our goal is always to get the thing we're working on to production as fast as we responsibly can, and the best way I've found to do that is with a pair.

We practice TDD. Our tests run fast (usually 1 second or less, for the whole suite) and we run them automatically on every change as we type. We generally test everything like this, except shell scripts, because we've never found a testing approach for scripts that we liked.
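To make that concrete, here's a minimal sketch of the "run the whole suite on every change" idea. It assumes a Python project using pytest and the watchdog package; the paths and command are illustrative, not our actual setup.

```python
# Sketch: re-run the whole (fast) test suite every time a source file changes.
# Assumes pytest and the watchdog package are installed; details are hypothetical.
import subprocess
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


class RunTests(FileSystemEventHandler):
    def on_any_event(self, event):
        if event.src_path.endswith(".py"):
            # The suite finishes in about a second, so just run everything.
            subprocess.run(["pytest", "-q"])


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(RunTests(), ".", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```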

We refactor absolutely mercilessly. Every line of code has to have a purpose that relates directly back to value to the company. If you want to know what a given line's purpose is, you can generally comment it out and see which test (exactly one test) fails. We don't go back and change things for the sake of changing them, though. Refactoring is never a standalone task; it's always done as part of adding new functionality. Our customers aren't aware if/when we refactor and they don't care, because it never impedes delivery.

We deploy first, and often. Step one in starting a new project is usually to deploy it. I find that figuring out how you're going to do that shapes the rest of the decisions you'll make. And every time we make the system better, we go to production, even if it's just one line of code. We have a test environment that's a reasonable mirror of our prod environment (including data), and we generally deploy there first.

Given all that, here's what we haven't been doing:

No formal backlog. We have three states for new features: now, next, and probably never. Whatever we're working on now is the most valuable thing we can think of. Whatever's next is the next most valuable thing. When we pull new work, we ask "What's next?" and discuss. If someone comes to us with an idea, we ask "Is this more valuable than what we were planning to do next?" If not, it's usually forgotten, because by the time we finish that, there's something else that's newer and better. But if it comes up again, maybe it'll make the cut.

No project managers/analysts. Our mentality on delivering software is that it's like running across a lake: if you keep moving fast, you'll keep moving. We assume that the value of our features is power-law distributed. There are a couple of things that really matter a lot (now and next), and everything else probably doesn't. We understand a lot about what is valuable to the company, and so the responsibility for finding the right tech<=>business fit best rests with us.

No estimate(s). We have one estimate: "That's too big." Other than that, we just get started and deliver incrementally. If something takes longer than a few days to deliver an increment, we regroup and make sure we're doing it right. We've only had a couple of instances where we needed to do something strategic that couldn't be broken up and took more than a few weeks.

No separate ops team. I get in a little earlier in the day and make sure nothing broke overnight. My coworker stays a little later, and tends to handle stuff that must be done after hours. We split overnight tasks as they come up. Anything that happens during the day, we both handle, or we split the pair temporarily and one person keeps coding.

No defect tracking. We fix bugs immediately. They're always the first priority, usually interrupting whatever we're doing. Or if a bug is not worth fixing, we change the alerting to reflect that. We have a pretty good monitoring system so our alerts are generally actionable and trustworthy. If you get an email there's a good chance you need to do something about it (fix it or silence it), and that happens right away.

No slow tests. All of our tests are fast tests. They run in a few milliseconds each and they generally test only a few lines of code at once. We try to avoid covering the same code with lots of different tests; that kind of overlap is a smell that you have too many branches in your code, and it makes refactoring difficult.

No integration tests. We use our test environment to explore the software and look for fast tests that we missed. We're firmly convinced this is something that should not be automated in any way; that's what the fast tests are for. If we have concerns about integration points, we generally build those checks directly into the software and make it fail fast on deployment.
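As a sketch of what "fail fast on deployment" can look like (the environment variable, URL, and health endpoint here are made-up examples, not details from our system), a startup check that refuses to come up if an integration point is unreachable:

```python
# Sketch: verify integration points at startup and fail loudly if any are broken.
# DATABASE_URL and the downstream health endpoint are hypothetical examples.
import os
import sys
import urllib.request


def check_integrations():
    failures = []

    # Required configuration must be present.
    if not os.environ.get("DATABASE_URL"):
        failures.append("DATABASE_URL is not set")

    # A downstream HTTP service must answer its health endpoint.
    try:
        with urllib.request.urlopen("http://localhost:8080/health", timeout=2) as resp:
            if resp.status != 200:
                failures.append(f"downstream health check returned {resp.status}")
    except OSError as exc:
        failures.append(f"downstream service unreachable: {exc}")

    return failures


if __name__ == "__main__":
    problems = check_integrations()
    if problems:
        # Fail at deploy time instead of limping along in production.
        for p in problems:
            print(f"startup check failed: {p}", file=sys.stderr)
        sys.exit(1)
    print("all integration checks passed")
```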

No CI/Build server. The master branch is both dev and production. We also use git as our deployment system (the old Heroku style), and so you're prevented from deploying without integrating first...which is rarely an issue anyway because we're always pairing.
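For the curious, the "old Heroku style" push-to-deploy setup usually amounts to a bare repository on the server with a post-receive hook. Here's a rough sketch; the work tree path, repository path, and restart command are assumptions (and the hook could just as easily be a shell script):

```python
#!/usr/bin/env python3
# Sketch of a push-to-deploy post-receive hook (hooks/post-receive in a bare
# repo on the server). Paths and the restart command are placeholders.
import subprocess
import sys

WORK_TREE = "/srv/app"      # where the deployed checkout lives (assumed)
GIT_DIR = "/srv/app.git"    # the bare repository receiving the push (assumed)

for line in sys.stdin:
    old_rev, new_rev, ref_name = line.split()
    if ref_name != "refs/heads/master":
        continue  # only master is deployed; it's both dev and production
    # Check the pushed revision out into the work tree.
    subprocess.run(
        ["git", f"--work-tree={WORK_TREE}", f"--git-dir={GIT_DIR}",
         "checkout", "-f", new_rev],
        check=True,
    )
    # Restart the service so the new code is live (command is illustrative).
    subprocess.run(["systemctl", "restart", "app"], check=True)
```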

No code reviews. Since we're pairing all the time, we both know everything there is to know about the code.

No formal documentation. Again, we have pairing, and tests, and well-written code that we both can read. We generally fully automate ops tasks, which serves as its own form of documentation. And as long as we can search through email and chat to fill in the rest, it hasn't been an issue.

Obviously, a lot of this works because of the context that we're in. But I can't help but wonder if there's something more to it than just the context. Does having a team of two in an otherwise large organization let us skip a lot of otherwise necessary practices, or does it all just round down to "smaller teams are more efficient"?



from Hacker News https://ift.tt/2OiA1CO
