As teams grow and the pressure to ship faster increases, code quality can be difficult to uphold. Use these techniques from Michael Tweed, a principal software engineer at Skyscanner, to help.
At the beginning of a project, it’s always tempting to write and ship code quickly.
However, as codebases grow and more engineers get involved, maintaining a high level of code quality becomes increasingly important, because engineers encounter unfamiliar code more often. Consistent standards allow individuals to navigate that code quickly and easily.
There will often be an expectation on engineering managers and senior/staff engineers to ensure that high-quality code is being delivered. But the definition of “code quality” can be very subjective, making it difficult to track and improve.
What is code quality?
One of the most common definitions you’ll find for code quality refers to test coverage: the percentage of your code that is exercised by automated tests. It can be measured by analysis tools available for nearly all popular programming languages.
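For example, on the JVM this might be Gradle’s built-in JaCoCo plugin, which can produce a coverage report every time the tests run. A minimal sketch (the task and report wiring follow the standard plugin; everything else about the project is assumed):

```kotlin
// build.gradle.kts
plugins {
    java
    jacoco
}

tasks.test {
    // Produce the coverage report whenever the tests run
    finalizedBy(tasks.jacocoTestReport)
}

tasks.jacocoTestReport {
    dependsOn(tasks.test) // tests must run before the report can be generated
    reports {
        xml.required.set(true)  // machine-readable, useful for CI and later analysis
        html.required.set(true) // human-readable summary
    }
}
```

Running ./gradlew test then leaves the reports under build/reports/jacoco by default.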
It is therefore quite common to hear that for a codebase to be high quality it needs to have “100% test coverage”, or another similarly high number.
In recent years there has been pushback against measuring and targeting test coverage metrics, on the grounds that chasing an arbitrary percentage is often worthless and can even lead to lower-quality tests and a false sense of security. For instance, consider a basic object used to represent an API response, which simply maps fields directly with no other logic. If code coverage were being enforced, you would end up writing repetitive tests with no meaningful value. This can also lead to engineers writing the “easiest tests” on autopilot just to satisfy the coverage requirement.
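To make the problem concrete, here is a hypothetical Kotlin DTO and the kind of test a blanket coverage target tends to produce; the names are invented for illustration:

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// A pure field-mapping object with no logic of its own
data class UserDto(
    val id: String,
    val name: String,
)

class UserDtoTest {
    // This test turns the lines "green" but only re-asserts
    // what the compiler already guarantees
    @Test
    fun `stores the values it was constructed with`() {
        val dto = UserDto(id = "1", name = "Ada")
        assertEquals("1", dto.id)
        assertEquals("Ada", dto.name)
    }
}
```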
However, there is a middle ground: it’s possible to use code coverage metrics and checks in a way that isn’t all-or-nothing.
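For instance, with the JaCoCo setup above, the coverage check can be a deliberate build gate rather than an implicit demand for 100%. A sketch, where the 80% threshold is purely an assumption standing in for whatever number the team agrees on:

```kotlin
// build.gradle.kts
tasks.jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                counter = "LINE"       // count covered lines...
                value = "COVEREDRATIO" // ...as a ratio of the total
                minimum = "0.80".toBigDecimal() // a team decision, not a law
            }
        }
    }
}

// Make the coverage gate part of the standard verification lifecycle
tasks.check {
    dependsOn(tasks.jacocoTestCoverageVerification)
}
```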
Utilizing exclusion/inclusion rules
One way to manage code coverage metrics is through exclusion/inclusion rules. These narrow the set of source files analyzed, letting you specify which parts of your codebase count toward the coverage calculation. This can be done at the individual class level, which is useful if you are integrating with a tricky dependency that can’t easily be tested.
However, inclusion/exclusion rules become a more powerful tool when you combine them with the architecture patterns used in your codebase. By specifying where code should have high test coverage, and what shouldn’t be covered at all, rules can help engineers put code in the right places.
If we go back to the previous example of representing an API response, this could be a data transfer object (DTO), which simply maps fields to pass around your code. You could therefore create a package for your DTOs and then exclude it from code coverage. You could also have a rule based on the class name: for example, any class whose name matches “*Dto”, regardless of package, will be excluded. It’s best to keep these rules broad rather than accumulating a large number at the individual class level, which quickly becomes unmanageable as the codebase scales.
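As a sketch of how those rules might look with the JaCoCo setup from earlier (the dedicated dto package and the “*Dto” naming convention are the hypothetical ones described above):

```kotlin
// build.gradle.kts
val coverageExclusions = listOf(
    "**/dto/**",     // everything in a dto package
    "**/*Dto.class", // any class named *Dto, regardless of package
)

// Apply the same exclusions to both the report and the verification gate
tasks.withType<JacocoReportBase>().configureEach {
    classDirectories.setFrom(
        files(classDirectories.files.map { dir ->
            fileTree(dir) { exclude(coverageExclusions) }
        })
    )
}
```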
Using this tool has multiple benefits. Not only does it allow these objects to remain untested, but if an engineer adds a DTO and it’s flagged for lack of coverage, they know it wasn’t placed in the right package or named to match the exclusion pattern. This gives engineers extra motivation to put DTOs in the right place and name them correctly, promoting consistency in your codebase.
By spending time defining accurate inclusion/exclusion rules, which can and should be checked into source control so they can be tracked and modified over time, you can ensure the tests that matter are being added to the code. As an additional step, you can integrate static analysis tooling. Static code analysis is often associated with finding potential vulnerabilities in source code, such as injections, broken authentication and access control, and insecure deserialization, but it can also enforce structural rules, ensuring that the code you say should be simple actually is simple. For example, it can verify that your DTOs do not contain any additional logic.
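One way to sketch that structural check on a JVM codebase is a library such as ArchUnit, which lets you assert conventions from an ordinary unit test. Verifying that a DTO truly contains no logic takes more work, but placement and naming rules already catch most drift; “com.example” is a placeholder for your real root package:

```kotlin
import com.tngtech.archunit.core.importer.ClassFileImporter
import com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes
import org.junit.jupiter.api.Test

class DtoConventionTest {
    private val imported = ClassFileImporter().importPackages("com.example")

    @Test
    fun `classes named Dto live in a dto package`() {
        classes().that().haveSimpleNameEndingWith("Dto")
            .should().resideInAPackage("..dto..")
            .check(imported)
    }

    @Test
    fun `dto packages contain only Dto classes`() {
        classes().that().resideInAPackage("..dto..")
            .should().haveSimpleNameEndingWith("Dto")
            .check(imported)
    }
}
```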
Another advantage is that this gives engineers more agency to define what should and should not be tested. A lightweight discussion process, such as a 1:1 to talk through changes to the inclusion/exclusion list, makes engineers feel more in control of the code they write. Keeping a decision log of these changes and their reasoning also helps new engineers get up to speed.
Combining and analyzing code coverage with other metrics
When working on large projects and codebases, it can also be valuable to combine code coverage with other metrics, such as the distribution of programming languages, or the distribution of code across packages or modules. Doing so can reveal specific areas for focus, for example, code in a particular language that is rarely tested, indicating a knowledge gap. This can then be a starting point for conversations with engineers about code quality in those areas.
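As one rough way to get that raw material, JaCoCo’s XML report already breaks coverage down per package, and a small script can extract the breakdown for joining with other data, such as module ownership or language statistics. The report path and structure below assume JaCoCo’s defaults:

```kotlin
import java.io.File
import javax.xml.parsers.DocumentBuilderFactory
import org.w3c.dom.Element

fun main() {
    val factory = DocumentBuilderFactory.newInstance().apply {
        // JaCoCo reports declare a DTD; skip fetching it so parsing works offline
        setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
    }
    val report = factory.newDocumentBuilder()
        .parse(File("build/reports/jacoco/test/jacocoTestReport.xml"))

    val packages = report.getElementsByTagName("package")
    for (i in 0 until packages.length) {
        val pkg = packages.item(i) as Element
        // Look only at the package's own counters, not those of nested classes
        val children = pkg.childNodes
        for (j in 0 until children.length) {
            val node = children.item(j)
            if (node is Element && node.tagName == "counter" &&
                node.getAttribute("type") == "LINE"
            ) {
                val missed = node.getAttribute("missed").toDouble()
                val covered = node.getAttribute("covered").toDouble()
                val percent = 100 * covered / (missed + covered)
                println("%-60s %5.1f%%".format(pkg.getAttribute("name"), percent))
            }
        }
    }
}
```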
Monitoring code coverage over time
Monitoring code coverage over time, without jumping to conclusions, is another important practice. It’s easy to default to thinking that “high test coverage is good” and “low test coverage is bad”, but before making any judgments, you should track the metric over a period of time.
Low coverage that is steadily increasing is a positive sign, showing that engineers are actively moving in the right direction; here, you can engage in conversations about how to support them. On the flip side, high initial coverage that is dropping should be a cause for concern, as it could result from a lack of motivation to add tests or from perceived pressure to deliver quickly. Such trends warrant further investigation.
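A lightweight way to support this, sketched below under the same JaCoCo assumptions as earlier, is to record the headline number on every CI run and review the trend line rather than any single snapshot; the CSV file name is illustrative:

```kotlin
import java.io.File
import java.time.LocalDate
import javax.xml.parsers.DocumentBuilderFactory
import org.w3c.dom.Element

fun main() {
    val factory = DocumentBuilderFactory.newInstance().apply {
        setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
    }
    val report = factory.newDocumentBuilder()
        .parse(File("build/reports/jacoco/test/jacocoTestReport.xml"))

    // The report-level counters are direct children of the <report> element
    val children = report.documentElement.childNodes
    for (i in 0 until children.length) {
        val node = children.item(i)
        if (node is Element && node.tagName == "counter" &&
            node.getAttribute("type") == "LINE"
        ) {
            val covered = node.getAttribute("covered").toDouble()
            val missed = node.getAttribute("missed").toDouble()
            val percent = 100 * covered / (missed + covered)
            // Append one row per run: date, overall line coverage
            File("coverage-history.csv")
                .appendText("${LocalDate.now()},%.1f\n".format(percent))
        }
    }
}
```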
Final thoughts
Improving your code quality can be brought about by a combination of strategies: defining clear inclusion/exclusion rules, integrating code coverage with other metrics to provide additional insight, and tracking both over time.
By having engineers feel invested in the process, rather than just having a target forced upon them, you can drive long-term improvements.