October 07, 2019
Instrumenting the performance of custom code (the code you write, not the libraries you require) in web apps has been a thorn in my side for years. Yes, we have a custom instrumentation API, but raise your hand if you enjoy sprinkling manual timing calls throughout your code. Anyone?
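To make the pain concrete, here's a minimal stand-in for what manual instrumentation tends to look like. The `instrument` helper below is hypothetical (a plain-Ruby sketch, not Scout's actual API), but the shape is the same: every expression you care about gets wrapped by hand.

```ruby
# Hypothetical stand-in for an APM custom-instrumentation API:
# wrap each interesting expression in a named timing block.
def instrument(name)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield
  elapsed_ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000
  puts format("%s took %.1fms", name, elapsed_ms)
  result
end

# The pain: every block of custom code you want visibility into
# needs its own wrapper, by hand, forever.
def build_report(rows)
  grouped = instrument("group_rows") { rows.group_by { |r| r[:user] } }
  instrument("sum_totals") do
    grouped.transform_values { |rs| rs.sum { |r| r[:amount] } }
  end
end

p build_report([{ user: "a", amount: 3 }, { user: "a", amount: 4 }])
```

Multiply that boilerplate across every controller and service object in an app and it's easy to see why most custom code never gets instrumented.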
Having a custom code instrumentation black hole doesn't matter if your app spends almost all of its time in common libraries that Scout instruments by default (ex: ActiveRecord, Redis, View Rendering, and HTTP calls). However, it's frustrating when you see a trace like the one below:
These types of traces are more common than we'd like. That's because we've found that the time spent in custom code during a typical web request is second only to database calls:
With time spent in custom code so significant, why haven't we (or other APM vendors) done much to provide easier visibility into your custom code's performance? Well, it's a hard problem. Almost all APM agents use a similar process for installing instrumentation: we explicitly wrap our instrumentation around calls to popular libraries. Since we don't know the code you are writing ahead of time, we can't pre-install instrumentation for your own code. We also can't simply instrument every method call in your app, as that would add far too much overhead for production use.
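For a sense of what "explicitly wrap instrumentation around calls to popular libraries" means in practice, here's an illustrative sketch using Ruby's `Module#prepend`, a common pattern for this kind of wrapping (the class and module names are invented for the example; this is not Scout's actual agent code):

```ruby
# A stand-in for a library class an agent knows about ahead of time.
class HttpClient
  def get(url)
    "response from #{url}"
  end
end

# The agent prepends a module so its timing code runs around the
# original method, which it reaches via super.
module HttpClientInstrumentation
  def get(url)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    super
  ensure
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
    puts format("HttpClient#get(%s) took %.6fs", url, elapsed)
  end
end

HttpClient.prepend(HttpClientInstrumentation)
puts HttpClient.new.get("https://example.com")
```

The catch is right there in the code: the agent has to name `HttpClient#get` ahead of time. That works for ActiveRecord or Redis, but there's no equivalent hook for a method the agent has never heard of.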
We haven't ignored the custom code black hole just because it is a hard problem. In fact, we've experimented with ways to solve this almost since our inception. One of these experiments, ScoutProf, was released as BETA but hasn't made the leap to General Availability (GA) for three reasons:
The awkwardness of the above three issues didn't leave us excited about making ScoutProf generally available (or about revisiting sampling-based profiling). In late 2018, Scout's Dave Anderson came up with an idea: instrument custom code similarly to how RuboCop analyzes code and formatting. AutoInstruments wraps instrumentation around code expressions in Ruby on Rails controllers by inspecting Ruby's Abstract Syntax Tree (AST) as code is loaded. These code expressions then appear in traces, just like the many libraries Scout already instruments. We isolate the instrumentation to controllers (not every file in an app), which keeps overhead limited.
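A toy version of the AST idea can be seen with Ruby's built-in `RubyVM::AbstractSyntaxTree`: parse controller source, walk the tree, and find the call expressions that are candidates for wrapping. (AutoInstruments' real rewriting is more involved; this just shows that the expressions are discoverable from the AST.)

```ruby
# Controller-style source we want to analyze (a contrived example).
source = <<~RUBY
  def index
    @users = User.active.to_a
    @report = build_report(@users)
  end
RUBY

# Recursively collect the node types of call expressions:
# CALL (receiver.method), FCALL (method(args)), OPCALL (a + b).
def walk(node, found)
  return found unless node.is_a?(RubyVM::AbstractSyntaxTree::Node)
  found << node.type if %i[CALL FCALL OPCALL].include?(node.type)
  node.children.each { |child| walk(child, found) }
  found
end

calls = walk(RubyVM::AbstractSyntaxTree.parse(source), [])
puts "found #{calls.length} call expressions"
```

Each node also carries source locations (`first_lineno`, `first_column`), which is what makes it possible to wrap timing around a specific expression and later point a trace segment back at the exact line of code.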
With that spark, AutoInstruments was born. Working alongside Samuel Williams, we released AutoInstruments in BETA in early September. I'm really happy with the results so far. Below is a look at two similar requests to the same controller-action in one of our own Rails apps. The screen on the left is from New Relic (which doesn't have autoinstrumentation technology). Scout is on the right:
In New Relic, 67.5% of time is allocated to uninstrumented application code. In the Scout trace, just 1% of the time falls into this generic catch-all custom code bucket. AutoInstruments was able to identify that 61% of time is associated with one custom method call: TransactionTrace::GroupCollection.new(@span_trace). Beyond that, every autoinstrumented code expression comes with a backtrace. With our GitHub integration, we're able to show you the code associated with this segment of the trace:
Now, the above screenshot is just one trace. How does AutoInstruments perform overall once enabled? We've found that apps using AutoInstruments spend roughly one-fifth as much time in uninstrumented custom code. That instrumentation black hole is mostly gone:
To try AutoInstruments, follow the simple upgrade steps in our docs. We also have a detailed FAQ on AutoInstruments which explains more of the technical details, overhead implications, and more. As always, email firstname.lastname@example.org if you have questions.