Sep 30, 2024
 

As someone who follows DHH on Twitter, I’ve seen a lot of posts about his switch to Linux as his daily driver. I’ve had Linux on my personal machine for a few years now, but never put a huge amount of effort into it since it’s not my “work” computer. Still, I try to poke around with some code when I can, so I do want to have it set up for development on personal projects. I’ve always had (at least some) co-workers who would code in Vim/Tmux exclusively (and swear by it), but it’s never been for me. I used Eclipse in college, bounced around a lot between VS Code and the IntelliJ community edition, briefly dabbled in Atom, and I even used regular vim (OK, gvim) for a bit…after I stumbled across a setting to use regular Windows keyboard shortcuts. No editor ever really stuck as something I particularly liked using; I could at least get VS Code (for front-end work) and then IntelliJ (for Java) “good enough,” but that was it. So I decided to give Neovim a try, since the Vim people I worked with always seemed passionate about how much they liked their editor, and because…why not? This time, I decided to do it properly, with the real Vim keybindings, not replacing them with Windows versions. After a few weeks, I see why Vim people are the way they are – this just feels like a better way of doing things.

Aug 31, 2024
 

I’ve been working in an agile environment for most of my career (other than a little stretch as a government contractor). Pretty much all of those environments were versions of Scrum. Scrum also happens to be the butt of nearly every joke I’ve ever seen about the software development lifecycle, and the subject of periodic articles and videos about why Scrum failed. On the one hand, it’s a shot at what I do, specifically how I do it; on the other hand, all the crap-talking has a good point. So why is having a problem with Scrum so ubiquitous, and is it really justified?

Jul 31, 2024
 

When you’re trying to concentrate, one of the worst things that can happen is to get interrupted with what’s essentially either a) a status update or b) a request for a status update. It was one thing pre-2020, when working remotely was largely something people did as-needed, but by this point, even if you’re working in an office* (*for most developers, a big open room, but you get the idea) again, you’d think we’d have used the time when everyone was remote to learn and adopt better ways of communicating updates without having to rely on interrupting people.

Jun 30, 2024
 

I was going through some old links I had saved for later and I came back to this tweet (it’s never X and thus never a post) from Derek Comartin of CodeOpinion.com. While I’m partial to Comartin’s position (effectively, that domain knowledge is more valuable than experience with a particular framework/tool/library), there was a surprising amount of disagreement in the replies. Part of this is because Twitter is a bad place for adding qualifiers (like the fact that both sets of skills are important, and both can be learned on the job in any job), but the prioritization was revealing about how a lot of people seem to think about their jobs in tech and how they think that fits into the bigger picture, and it forces some uncomfortable observations about whether or not that’s true.

May 31, 2024
 

I’ve been against the idea of writing getters and setters off as boilerplate for a while, yet sadly that’s the prevailing idea in software development. A large part of that is probably fueled by the fact that we treat objects as nothing more than a local copy of database records. The result is that half the time we assume the data is already valid and we don’t have to validate or default anything (when we’re reading it from the database), and the other half we assume we only have to validate data when it first enters our system via an endpoint (when creating or updating a database record). As a result, we never have to actually write proper getters and setters, and can lazily use the templates our IDEs make for us and outsource our validation to some simple annotations. It’s a tempting sales pitch, but it’s for a specific (limited) use case. Getter and setter methods, on the other hand, are universal.
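To make that concrete, here’s a minimal sketch (the Invoice class and its rules are invented for illustration, not taken from any real codebase) of what a hand-written setter buys you over a generated one:

```java
import java.math.BigDecimal;

public class Invoice {
    private BigDecimal total;

    // IDE-template style: accepts anything, including nulls and negative
    // totals, and trusts some other layer to have validated the value.
    public void setTotalUnchecked(BigDecimal total) {
        this.total = total;
    }

    // Hand-written setter: the object enforces its own invariants, no
    // matter whether the value came from an endpoint, the database, or
    // anywhere else in the code.
    public void setTotal(BigDecimal total) {
        if (total == null) {
            throw new IllegalArgumentException("total is required");
        }
        if (total.compareTo(BigDecimal.ZERO) < 0) {
            throw new IllegalArgumentException("total cannot be negative");
        }
        this.total = total;
    }

    public BigDecimal getTotal() {
        return total;
    }
}
```

The point isn’t the specific rules – it’s that the validation lives with the data itself, instead of being scattered across whichever layers happen to remember to check.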

Apr 30, 2024
 

Chelsea Troy wrote a fantastic article last September titled “What do we do with the Twitter-shaped hole in the internet?” While I don’t subscribe to the premise that Twitter is “crashing,” there were about two articles’ worth of good reading in one URL, so it bears a lot of consideration (it’s worth noting that I have a much smaller follower count, get less activity on my tweets, and generally tweet less original stuff vs. just retweeting, so my mileage clearly varies). Troy does a good job of discussing the things Twitter does well, areas where it’s historically been weak from a fundamental “this is how the app was intended to run” perspective, as well as good comparisons and contrasts with other communications apps and their designed limitations. She follows up with a great post walking through a potential design for a hypothetical new social application that would be very appealing. But in the process of designing her hypothetical application, she includes a great discussion on identifying and promoting quality, and that’s the part that really sticks out as particularly interesting.

Mar 30, 2024
 

Lately I’ve been trying to run some server software in a local Docker container just to play around with it, but I ran into network issues trying to use it. It’s 2024, and it seems like everything is containerized (precisely so you can run it locally the same way you would in production) and deployed via infrastructure as code, so why shouldn’t I be able to grab the container image, fire it up, and actually use this service? I think this is a result of only ever running our code in the cloud, and it’s something we need to be explicitly considering.
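One common way this bites people (a purely hypothetical sketch here, since I’m not naming the actual service) is a server that’s hard-coded to bind to localhost or a cloud-specific address, so it works in its normal deployment but not through ports published from a local container. Making the bind address and port plain configuration keeps the same image usable anywhere; the BIND_HOST and PORT variable names below are invented for this example:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

public class Main {
    public static void main(String[] args) throws Exception {
        // Inside a container, binding to 127.0.0.1 makes the server
        // unreachable through published ports, so default to 0.0.0.0
        // and let the environment override it when needed.
        String host = System.getenv().getOrDefault("BIND_HOST", "0.0.0.0");
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

        HttpServer server = HttpServer.create(new InetSocketAddress(host, port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }
}
```

With that, something like `docker run -p 8080:8080 my-image` just works, with no cloud-only networking assumptions baked into the code.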

Feb 29, 2024
 

The natural corollary to trying to manage complexity is a desire to keep things simple. Which is, in general, a good thing to do. Simpler code is easier to maintain, easier to debug, easier to test, and just plain easier. But even though we’re on a never-ending quest to make things “simple,” it’s easy to get distracted by heuristics that aren’t really good proxies for simplicity, and as a result make things more complicated than they need to be.

Keep it simple, stupid

When talking about simplicity, we need to start with something simplicity is not, and that’s lines of code (more or fewer). To start, let’s look at the case of being overly clever, and using some obtuse one- or two-line bit of abstract trickery to do something that could be done explicitly over 5-10 lines of code. Yes, in that instance, more lines of code are certainly simpler to understand. But there are also times when you have dozens of lines of convoluted logic and tons of branching paths that, after some time and thought, can be condensed into a more straightforward flow that reduces the number of lines of code and is simpler. So are these instances some sort of simplicity paradox? No, but as with “best practices,” it’s easy to look at these examples and focus on the characteristics of a situation rather than the principles behind the decisions.
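A contrived example of the first case (the shipping rules here are made up for illustration): both methods do exactly the same thing, but only one of them reads like the business rules it implements.

```java
public class Shipping {
    // "Clever": one dense line of nested ternaries that the reader has
    // to mentally unwind before the rules become visible.
    static String rateClever(int total, boolean member) {
        return total > 100 ? "free" : member && total > 50 ? "discounted" : "standard";
    }

    // Explicit: more lines, but each branch states a rule plainly.
    static String rateExplicit(int total, boolean member) {
        if (total > 100) {
            return "free";
        }
        if (member && total > 50) {
            return "discounted";
        }
        return "standard";
    }
}
```

Flip the example around – imagine dozens of scattered if-blocks that all reduce to those same three rules – and condensing them into something like rateExplicit is the simplification. The principle in both directions is readability, not the line count.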

Code simplicity isn’t about the code at all (as counter-intuitive as that may seem) – it’s about the developer having to read, understand, and work with the code. In other words, simplicity is another way of discussing code’s readability – which means emphasizing clarity and focus in the code. By the way, “code” here refers to more than just lines in {insert your favorite programming language here}. It also includes organization, variable and method names, and meaningful comments (the kind that discuss data state and the applicable business rules).

It’s also worth mentioning that just because something is simple doesn’t mean it isn’t powerful – the two terms aren’t mutually exclusive. In fact, you can very often get something that seems like it can do a lot, written by developers who seem to be able to put out new updates with ease, precisely because people put a lot of work in up-front to keep the code as simple as possible. As a result, the codebase is easier to understand (which makes onboarding new developers and reviewing new code easier) and easier to test (so you can develop faster without fear of regressions), letting developers focus their time, energy, and complexity budget on the parts of their problem domain that are actually complicated. And, spoiler alert, the most successful companies are generally the ones that manage to find ways to simplify those complicated parts too.

It’s the “saying ‘no’ a thousand times for every ‘yes'” philosophy Apple used to swear by. More “stuff” adds more complexity, and more complexity means more friction in using your product. That makes your users think more about how to use your application when they should be thinking about the thing they’re accomplishing by using your application. Now, some products try to solve this problem by doing the thinking for you and just making something happen automatically. It’s important to understand something – this doesn’t actually make things simpler. If you’re making an honest effort at this, then you likely have a bunch of code to collect user behavior and then use it to try to “predict” what they want to do in any given context. And here’s the thing – if you’re right, it’s a slight convenience, but if you’re wrong, then your software is actively angering users by doing what they don’t want. How often are you right, by the way? Do you have any way of measuring that?

On the other hand, you can offer a simpler experience by letting the user easily tell you what they want, and then doing that. No need to track and capture a bunch of behavior, no need to run machine learning or intuit preferences – just follow simple instructions. It’s the exact same output, but with a higher satisfaction rate, because you didn’t over-complicate things trying to think for your users.
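A toy sketch of the difference (the class and names are invented for this example): an explicit, user-set preference is right by definition, with none of the tracking-and-guessing machinery.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HomeScreenPreferences {
    private final Map<String, String> screenByUser = new ConcurrentHashMap<>();

    // The user tells us what they want, once...
    public void set(String userId, String screen) {
        screenByUser.put(userId, screen);
    }

    // ...and we simply do that, falling back to a sensible default
    // until they've told us otherwise. No behavior tracking, no
    // prediction model, no wrong guesses to apologize for.
    public String get(String userId) {
        return screenByUser.getOrDefault(userId, "dashboard");
    }
}
```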

There’s a lot of complexity involved in software, but it’s our job to reduce it as much as possible. That includes breaking the problem down into simpler chunks, keeping the logic as simple as possible, making the code as simple to comprehend as possible, making interacting with the software as simple to do as possible, and making it as simple as possible for users to end up in the state they actually wanted (as opposed to the state you assumed they wanted). If we succeed in doing that, our software is just plain better.

Jan 31, 2024
 

The key mantra my computer science professors worked hard to drill into us at college was always “Computer science is about solving problems, computers are simply a tool we use to do it.” As fun as it is to meme about the technical interview vs. the actual job, the reality is that we actually do have to figure out how to implement things that make the business money from time to time. When that happens, the ability to work through problems is what separates the successful developers from the code monkeys who can implement pseudo-code off a user story. And given some of the technical interviews I’ve sat in, it’s not necessarily a skill that’s developed when teaching people how to code.

Dec 31, 2023
 

Eventually, all software becomes complex. Projects that run entirely in the command line are rare, and even simple little web applications seem to get more involved once you actually want to run them somewhere other than localhost. Sure, you have your executable package, but there’s also likely to be a bunch of things that don’t exist on your local machine: load balancing, multiple instances, probably some sort of metrics agent running alongside the code, likely some sort of caching – the list goes on. Code being used by others always gets more complex than something just used by you, as your code evolves to make sure that no matter what people do, the application doesn’t crash, error out, or get its data into an invalid state. What’s important is acknowledging that software naturally gains complexity as it moves from “local project” to “running in production,” and making sure we’re paying attention to where we’re adding this complexity and why it’s there.
