The key mantra my computer science professors worked hard to drill into us in college was always “Computer science is about solving problems; computers are simply a tool we use to do it.” As fun as it is to meme about the technical interview vs. the actual job, the reality is that we actually do have to figure out how to implement things that make the business money from time to time. When that happens, the ability to work through problems is what separates successful developers from the code monkeys who can implement pseudo-code off a user story. And given some of the technical interviews I’ve sat in on, it’s not necessarily a skill we develop when teaching people how to code.
Eventually, all software becomes complex. Projects that run entirely in the command line are rare, and even simple little web applications seem to get more involved once you actually want to run them somewhere other than localhost. Sure, you have your executable package, but there’s also likely to be a bunch of things that don’t exist on your local machine: load balancing, multiple instances, probably some sort of metrics agent running alongside the code, likely some sort of caching – the list goes on. Code that other people use always gets more complex than code only you use, because it evolves to make sure that no matter what people do, the application doesn’t crash, error out, or get its data into an invalid state. What’s important is acknowledging that software naturally gains complexity as it moves from “local project” to “running in production,” and making sure we’re paying attention to where we’re adding this complexity, and why it’s there.
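To make that defensive evolution concrete, here’s a minimal sketch (hypothetical names throughout, not any particular codebase) of the kind of input validation that rarely exists in a localhost-only project but piles up once strangers start sending you data:

```python
from dataclasses import dataclass

# Hypothetical sketch: the checks that rarely exist in a localhost-only
# project, but accumulate once real users start sending you input.
@dataclass
class Order:
    item_id: str
    quantity: int

def parse_order(payload: dict) -> Order:
    item_id = payload.get("item_id")
    if not isinstance(item_id, str) or not item_id.strip():
        raise ValueError("item_id must be a non-empty string")

    quantity = payload.get("quantity")
    # bool is a subclass of int in Python, so reject it explicitly
    if not isinstance(quantity, int) or isinstance(quantity, bool) or quantity < 1:
        raise ValueError("quantity must be a positive integer")

    return Order(item_id=item_id.strip(), quantity=quantity)
```

None of these checks add features; every one of them exists purely so the application can’t be pushed into an invalid state.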
Running software on cloud providers certainly is convenient, but it’s also really easy to run up the associated bill if you’re not consciously thinking about costs (something most of us, myself included, don’t do often enough). In the vein of “DevOps,” this has led some companies to continue the trend of taking a term meant to emphasize having actual cross-functional teams and slapping it on something utterly unrelated, giving rise to something called “FinOps.” Basically, FinOps, or “Financial Operations,” is about incorporating business concerns (namely, cost) into the development process. The official site may say that it’s a “cultural practice,” but we all saw how well that worked out with “DevOps.” This is the type of thing that gets people arguing that you should get out of the cloud entirely (because you can save tons of money). Is that worth it, though?
It’s common for online applications to refer to themselves as a “platform” as soon as they have a public-facing API. Facebook is probably the biggest example of this, but it’s a fairly standard marketing tactic (I used to work for a company that did the same thing). Basically, you make some public-facing endpoints and voila, you’re now a “platform” – and developers, please build stuff for us so we can increase customer lock-in. That’s not how this actually works – and it never has – because that’s not how platforms work.
The term “exit” has always irritated me when people write about startups, especially because it only gets used when a company is either bought or IPOs. I’m not saying that startups don’t use acquisition (or IPOs, if you’re Twitter – never X) as an exit strategy to avoid actually making money, but a lot of the time companies do this after they’ve become a profitable, self-sustaining business. Despite that, we still don’t have a clear definition of when non-retail companies stop being startups and start being plain old businesses (even if they’re small), and that needs to be fixed.
If you’ve spent any time on any system that involves importing data, you’ve likely heard the phrase “garbage in, garbage out” (GIGO). It’s usually muttered (obscenities optional) right around the point a developer has given up on trying to “fix” the data coming into a system and is willing to abandon users to whatever crappy, unusable, incorrect data they have floating around on their systems. As long as whatever junk users are importing doesn’t actually break your code, let them suffer.
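In code, that attitude usually looks something like this – a hypothetical importer sketch (names and CSV shape invented for illustration) that imports what parses and quarantines the garbage rather than crashing on it:

```python
import csv
import io

# Hypothetical sketch: keep what parses, shunt the garbage into a
# rejects pile instead of letting it take down the whole import.
def import_rows(raw_csv: str) -> tuple[list[dict], list[dict]]:
    imported, rejected = [], []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        try:
            row["price"] = float(row["price"])  # garbage here raises an error
            imported.append(row)
        except (KeyError, TypeError, ValueError):
            rejected.append(row)  # the user's problem, not a crash
    return imported, rejected

good, bad = import_rows("sku,price\nA1,9.99\nB2,not-a-number\n")
print(len(good), "imported;", len(bad), "rows left for the user to suffer with")
```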
The world is full of applications that are big enough that companies need multiple development teams to work on them, no matter how those teams are organized (though I’m going to go out on a limb and guess they’re not completely cross-functional, even ignoring the hard parts). The benefit of organizing work across multiple software teams is that they can operate independently, but the downside is that any best practices one team learns are hard to propagate to the others. After all, the whole point is that teams only need to interact to inform each other of service-level changes. So how can all these development teams actually learn (and adopt) best practices?
OK, so the phrase “caching was a bad choice” isn’t exactly something you expect to hear about software development. Generally, caching is a good thing – it improves performance by reducing calls to your database or external services, saving time and resources by not re-querying or re-computing. What’s not to love? Well, as with everything else in life, the devil’s in the details, and if you don’t get those details right, then anything can become a failure.
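As a concrete example of the details that bite, here’s a minimal sketch (hypothetical names, an assumed 60-second TTL) of a read-through cache and the invalidation step that’s easy to forget:

```python
import time

# Hypothetical sketch of a read-through cache with a TTL.
# The detail that bites: anything cached can be up to TTL_SECONDS stale.
_cache: dict[str, tuple[float, object]] = {}
TTL_SECONDS = 60.0

def fetch_from_db(key: str) -> object:
    # Stand-in for the expensive query the cache is meant to avoid.
    return f"value-for-{key}"

def get(key: str) -> object:
    now = time.monotonic()
    hit = _cache.get(key)
    if hit is not None and now - hit[0] < TTL_SECONDS:
        return hit[1]  # cache hit - possibly stale
    value = fetch_from_db(key)
    _cache[key] = (now, value)
    return value

def update(key: str, value: object) -> None:
    # Forgetting this invalidation is the classic "detail" that turns
    # caching from a performance win into a correctness bug.
    _cache.pop(key, None)
```

Skip that last `pop` and the cache happily serves old data for up to a minute after every write – which is exactly the kind of detail that earns caching the “bad choice” label.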
Like just about everybody else who runs software post-dot-com bust, I read about Amazon Prime Video re-architecting their monitoring service and saw all the microservices vs. monolith hot takes. For the best monolith vs. microservice analysis of the post, I recommend CodeOpinion’s video. What I found more interesting about the post wasn’t the monolith vs. microservice debate (I agree with CodeOpinion – that’s not really the relevant point of the original article), but rather the limits of using Functions as a Service (FaaS)…as a service.
Working in software development, it’s easy to get caught up in thinking the code is everything, and that everything must be in service to the code. After all, not only is the product or service we deliver code, but running on the cloud means your operations are now code, your infrastructure is code…it seems like literally everything about software has been turned into “just code.” Software has eaten the world, after all, and yet writing the code is still the least important part of the whole software development process. That doesn’t mean it’s unimportant, mind you – “least” is a relative term – but there’s so much more that goes into building and running a successful application or service than commands and semicolons.