Creeping Problems for App Performance 

Slow code is everywhere, or maybe just “anywhere.” “Slowness” is a spectrum. Sometimes you triumphantly remove performance bottlenecks, and the bar for “slow” keeps getting lower. In our experience, though, things often trend in the other direction.

For example, we eventually noticed a pattern after deployments: our app suddenly spent a lot of time reaching out to a third-party service to warm its caches. The pattern was not new, but the severity had been ramping up. Wherever the bar for “slow” had been, this had grown way beyond it. Luckily, we use Scout APM at Scout, so we tracked the issue down and had a fix in less than an hour: a one-liner, easy-peasy. But the fix itself isn’t the interesting part, nor is the slow code by itself. It is the interaction between the two that shows a real APM superpower.

Payback Time

Slow code is everywhere, but it can often run acceptably fast for a long time. The code in question that was warming up these caches was written by an expert. It was also undoubtedly written during a taxing project involving critical third-party integrations of a sensitive nature. You know the kind. Initial assumptions were made about the volume of data that would exist in the other system; some code from another interaction with a similar access pattern was copied and tweaked, and that was that. A small amount of technical debt had just been taken on. On to the next requirement. The project went live and trucked along for several years without needing another lick of attention.
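To make that concrete, here is a minimal sketch of the kind of code being described. Everything in it is hypothetical, not our actual code or fix: the partner.example.com endpoint, the warm_cache helpers, and the existence of a bulk endpoint are assumptions for illustration only.

```python
import requests

# Hypothetical third-party API used only for illustration.
THIRD_PARTY_API = "https://partner.example.com/api"


def warm_cache(record_ids, cache):
    """Warm the local cache after a deploy, one request per record.

    Perfectly reasonable while the partner system holds a handful of
    records, but the time spent here grows with their data volume, so
    every deploy gets a little slower.
    """
    for record_id in record_ids:
        resp = requests.get(f"{THIRD_PARTY_API}/records/{record_id}", timeout=5)
        resp.raise_for_status()
        cache[record_id] = resp.json()


def warm_cache_batched(record_ids, cache):
    """The kind of small fix an APM trace points you toward: one bulk
    request instead of N round trips (assuming the partner API offers
    such an endpoint).
    """
    resp = requests.get(
        f"{THIRD_PARTY_API}/records",
        params={"ids": ",".join(str(i) for i in record_ids)},
        timeout=30,
    )
    resp.raise_for_status()
    for record in resp.json():
        cache[record["id"]] = record
```

The point isn’t this particular change; it’s that the per-record version looks fine in review and stays fine until the other system grows, which is exactly the sort of creep a production trace makes obvious.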

Could this expert developer have looked back at this code and found the same optimization waiting for them? Probably. Did they knowingly take on a little tech debt to get that project out faster and reduce their cognitive load? Maybe not! But in my experience, there are definite benefits to not optimizing every function before its time. Premature optimization, root of all evil, etc. This is especially true because, at the end of these several years, we had a monitoring tool that told us exactly what type of quicksand our code was wading through. The debt we had taken on, intentionally or not, became practically zero interest.

This is an underappreciated feature of performance monitoring tools, especially continuous ones that run alongside your app in production. You can move faster when you know that performance regressions will be easy to spot later. Adding an APM to an existing app can also be like going to a debt consolidation company for a certain type of tech debt. Lower your payments today! You’ll probably even find some debt you didn’t know you had.

I’m not advocating being sloppy; human judgment is still required, and choosing the right data structure for your access pattern, for instance, probably doesn’t take hours of wading through docs or making exploratory network requests. But when “working” and “fast enough” are the current requirements, something like Scout can help you quickly find and pay back a little debt like this when it does become a bottleneck. It’s like a little zero-interest loan, just for you. And if a part of you dies a little when you think of shipping something that isn’t as fast as it could possibly be… we understand. But think of the users. You have already made so many compromises to serve them faster and better. Code that runs faster is just one means to that end. Launching features faster is an even bigger lever. Get to it!

lance@scoutapm.com