So I haven’t made as much progress on the actual gift registry I wanted to write this year, although I do have a Go web application that connects to a Postgres database, with an endpoint that gets the database stats, collects telemetry data, pipes it all to an OpenTelemetry stack, and returns an HTML page with the health check results (server-side rendering is a much more pleasant experience than trying to wrangle JavaScript frameworks). I also have tests that confirm this is working. That took a lot longer to set up than I anticipated, but I’m really happy with what I’ve been able to do.
In learning Go, there are 2 posts that influenced how I wrote my initial testing setup. The first is this one, which I found looking for a way to run Postgres during unit testing (even though I ultimately went with dockertest), specifically this linked Tweet. The other was this article from an (apparently very) long-time Go developer about writing web applications, specifically about running the application in test as close to production as possible, and doing as much testing from the user’s perspective (calling the web app via REST endpoints) as possible. Both of these appeal to my existing biases about how automated testing should work, but what really impressed me is just how doable it is to simply run a service (or set of services) and perform automated testing on them.
Even with a 1-endpoint web application that just runs a health check by connecting to external dependencies, I’m still able to confirm it works by…connecting to the external dependencies, which are handily running in temporary containers on my local system. So far, I haven’t mocked anything, even though I’m calling “external” services. In fact, I’m just setting up my test and calling a live endpoint over HTTP, and a copy of the entire stack, stood up just for testing, takes it from there. It’s about as pure a testing setup as I could ever hope for.
Sure, mocking things like the database and the calls to the OpenTelemetry stack would be easier (and in the case of connecting to an observability back-end just because my health check endpoint pings it, very justifiable), but it raises the question of whether we’re too quick to jump to mocks with automated testing in other languages. I stand by my old unit testing comments about not being too hung up on “unit” and having live databases (although I’ve moved from giving the build machine a database to running something in-memory, since you can just do that yourself if you’re using something MySQL-based).
The nice thing about Go is that switching to mocks is pretty straightforward. Instead of a specialized wrapper around an object, I can just put the specific behavior I want in an interface (Go emphasizes small interfaces, with as few methods as possible, 1 being the ideal), write custom dummy implementations of the methods, and then pass my “mock” to a “real” function, and everything works normally from there. It just feels good to not need to do that unless I need to test against a service I can’t run locally in a test container.
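A minimal sketch of that interface-based mocking, with hypothetical names (`StatsSource`, `fakeStats`, `healthSummary` are all made up for illustration): because Go interfaces are satisfied implicitly, both the real database wrapper and the test fake can satisfy a 1-method interface without declaring anything.

```go
package main

import "fmt"

// StatsSource is a hypothetical 1-method interface; in the real app
// a wrapper around *sql.DB would satisfy it implicitly.
type StatsSource interface {
	OpenConnections() int
}

// fakeStats is the "mock": a trivial test-only implementation.
type fakeStats struct{ open int }

func (f fakeStats) OpenConnections() int { return f.open }

// healthSummary is "real" code that only ever sees the interface,
// so it can't tell the fake from the live database.
func healthSummary(s StatsSource) string {
	if s.OpenConnections() > 0 {
		return "healthy"
	}
	return "no connections"
}

func main() {
	fmt.Println(healthSummary(fakeStats{open: 3}))
	fmt.Println(healthSummary(fakeStats{open: 0}))
}
```

No mocking library, no code generation: the fake is just a struct with the 1 method the function under test cares about.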
1 thing Go does well that other languages should absolutely steal is table-driven tests. Basically, you define a struct with individual test-specific variables (inputs, settings, expected outputs), collect them all into a slice, and then iterate over them, running each as an independent test, all neatly contained in 1 method. Compare that to Java, where each scenario is its own test method, or Node, where there’s a hierarchy of test setups leading to individual methods for individual tests, and it creates a simpler, cleaner experience. Add in running each individual case in parallel and tests are also quick, all while still being clear about which individual scenario failed.
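The pattern looks roughly like this (with a made-up helper, `atLeast`, standing in for real application code): a slice of anonymous structs, a loop, and `t.Run` plus `t.Parallel` to get named, concurrent subtests.

```go
package main

import "testing"

// atLeast is a hypothetical function under test.
func atLeast(n, min int) bool { return n >= min }

func TestAtLeast(t *testing.T) {
	// each case is 1 row in the table: a name, the inputs,
	// and the expected output, all in 1 anonymous struct
	cases := []struct {
		name     string
		n, min   int
		expected bool
	}{
		{"above threshold", 5, 3, true},
		{"equal to threshold", 3, 3, true},
		{"below threshold", 1, 3, false},
	}
	for _, tc := range cases {
		tc := tc // capture the loop variable (needed before Go 1.22)
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel() // cases are independent, so run them concurrently
			if got := atLeast(tc.n, tc.min); got != tc.expected {
				t.Errorf("atLeast(%d, %d) = %v, want %v",
					tc.n, tc.min, got, tc.expected)
			}
		})
	}
}
```

When a case fails, `go test` reports it as `TestAtLeast/below_threshold`, so you know exactly which scenario broke without hunting through a pile of near-identical test methods.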
Obviously, none of this is perfect. My observability test container takes so long to come online that running my tests takes the better part of a minute (up from a few seconds when all I was connecting to was a simple Postgres container). There may be ways to speed that up (spinning up the test containers in goroutines, with a wait group to make sure everything’s started before continuing), but for now it’ll work. And if I decide that having a live observability container to call a live health check on isn’t worth the time it takes to spin up the container, I can always wrap the call to get the observability health status in an interface and write a mock later.
Revisiting test-driven development
Looking back on my original unit testing post, I think the only thing that’s changed is my opinion on test-driven development. I’ve been trying it lately, and I’m starting to like it. It’s not that I’ve started working somewhere with more complete requirements that are more set in stone, but rather that I’ve changed how I look at testing, and test-driven development in particular.
My first shift in thinking is an attempt to avoid letting perfect be the enemy of good. Rather than trying to ensure that every possible scenario is tested (which becomes increasingly impossible the bigger the application is and the more connections it has), I’ve been trying to focus on tests that verify the code is “good enough” to move on. Mentally, I think of testing as a progression: from automated tests on my machine (e.g. mvn clean test or make test), to running the application locally on my machine, to running in QA, to running in a staging environment, to running in production. Every step in the process should verify that the code is good enough to move on to the next, where it integrates with more components (the UI and a live database, then other services in their test environments, then more realistic volume and data patterns), until eventually we’re confident enough to let users at the application. In this scenario, we don’t need our unit tests to be all-encompassing, just good enough that we feel confident introducing more complexity and uncertainty and trying again.
The other shift was away from thinking that in order to do test-driven development I needed complete certainty over requirements. Instead, I just need decisions about what the code will and will not do. Those can come from product management, or just from me making a reasonable assumption while I wait for an official answer from product, but basically I’m coming at test-driven development from the perspective of “if I know enough to code the behavior, I know enough to verify that the actual behavior matches what I intended to code.” In this scenario, writing the tests up front is a “measure twice, cut once” take on software development. As I think of scenarios I want to make sure I account for, I write a test (with mocks at work), and use those tests to confirm that I’m on the right track with my code. When I think everything is written, and all the tests pass, I can run the application locally to confirm that it does what I expect without mocks, and then if that looks good I feel confident about putting it in QA and letting other people start using it.
I do know why people don’t make time to work tests into day-to-day work. I’m guessing it’s a combination of just trying to get stuff done because you’ve got more tickets to work on, and the fact that even when your tests just run the code, they’re still not the most fun thing to be working on. But tests for the parts of the code you’re working on are rapid feedback loops that help you know you’re on the right track, and tests for the parts of the code you’re not working on are a canary in the proverbial coal mine, making sure everything’s still doing (and not doing) what you expect.
Go makes automated testing not only pretty straightforward, but surprisingly powerful. It’s reinforcing my belief that you can effectively test your code by just running a localized version of the application with test data. I’ve also come around to test-driven development, not as a means of codifying finalized requirements, but rather as a means of quickly double-checking what I’m doing, as well as a fast proxy for whether or not I should feel comfortable actually running the server and opening the application in a browser. I’ve found that thinking through the basics of the code flow, so I can plan out the tests, has done a pretty good job of focusing and simplifying the code I write. Whatever the language, I do want to find opportunities to rely less on mocks and more on live logic, so I can push most of my test cases to calling the endpoints like actual users. The “just build a copy of the web application with a test configuration and run it with test data” approach isn’t unique to Go, but it isn’t how we typically test, because we’ve spent our efforts fixating on “units” rather than on “what’s the best way to assure ourselves this is working?” Hopefully, we can let go of relying on mocks and just run the code instead.