Log Management
Installation
Our logging solution aims to “just work” with Rails and Python applications. Other frameworks and languages coming soon!
Each App can have logs enabled individually and requires a unique Ingest Key. Visit the Logs page from an App in the Scout UI to get started.
For a Rails Application
- Add the scout_apm_logging gem to your Gemfile.
- Configure a few environment variables:

  SCOUT_LOGS_MONITOR=true SCOUT_LOGS_INGEST_KEY=aaaa-1111-aaaa-1111 # Provided in App Logs Page

- Deploy!
Further configuration options are available, but the above is the minimum required to get started.
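If you prefer file-based configuration, the same settings can live in config/scout_apm.yml instead of environment variables. A minimal sketch, assuming the environment-keyed layout typically used in that file (the key shown is a placeholder):

# config/scout_apm.yml -- minimal sketch
production:
  logs_monitor: true
  logs_ingest_key: aaaa-1111-aaaa-1111 # Provided in App Logs Page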
For a Python Application
- Install the Scout Python logging package:

  pip install scout-apm-logging

- Set SCOUT_LOGS_INGEST_KEY in your existing configuration or via an environment variable. This key is provided in the Logs page of the enabled App.
- Add the Scout logging handler to your Python logging configuration. Here’s an example using dictConfig:
import os
from logging.config import dictConfig
from scout_apm_python_logging import ScoutOtelHandler
LOGGING_CONFIG = {
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"verbose": {
"format": "%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s"
},
"simple": {"format": "%(levelname)s %(message)s"},
},
"handlers": {
"scout": {
"level": "DEBUG",
"class": "scout_apm_python_logging.ScoutOtelHandler",
"service_name": "your-python-app",
},
"console": {
"level": "DEBUG",
"class": "logging.StreamHandler",
"formatter": "simple",
},
},
"loggers": {
"": { # Root logger
"handlers": ["console", "scout"],
"level": "DEBUG",
},
},
}
# Apply the logging configuration
dictConfig(LOGGING_CONFIG)
This configuration attaches the ScoutOtelHandler to the root logger at the DEBUG log level, which will send all logs, including those from internal libraries, to Scout. Python logging is highly configurable, and we provide some common configurations to get you started.
Overview
Scout’s Log Management is designed with ease-of-use as a top priority. Sending logs to us and viewing them in the UI should be as simple as possible. By working with our existing agents, we can also gather more context and automatically apply it to your application logs. This gives you extra power to filter and search through your logs, and to correlate them with other performance data. Some highlights:
- Filter logs by entrypoint (i.e. Controller originating the activity)
- Filter logs by Custom Context (attributes - anything you have added to the Scout APM agent)
- Fast in-memory exploration
Functionality
Visiting the Logs page will begin loading the log records we have received within the timeframe you have selected, e.g. “Past 3 hours.” Logs load from most recent to oldest. We load 10K records initially; more can be loaded as needed. As the logs load, you can rapidly filter to the desired Severity Levels and search/filter via regex applied to the message body and attributes. The time window can also be narrowed via horizontal selection (“brush”) on the chart. These filters apply to logs that are already loaded into the browser.
Once you have identified interesting lines, you can also use the pre-load filters to screen logs closer to the log storage layer, allowing you to scan longer timeframes without pulling too many records into the browser. This two-phase approach can be a powerful and flexible way to efficiently search through a very large corpus of log records. We hope you love it!
Filtering
Pre-Load Filters
Pre-load filtering allows you to limit the logs that are loaded into the Logs View. These filters are applied to the logs in our storage system before they are loaded into the application. Filtering by time especially can reduce the amount of data processed and returned by our application. Keep in mind that rapid filtering and manipulation can still be performed within the Logs Table after the logs have been loaded.
Logs can be filtered by:
- Date and Time: Using the Scout time selector, as usual.
- Message Content: A simple CONTAINS filter is available.
- Attributes: Values of any attributes that are present in the logs can be used to filter.
For Attribute filters, select the attribute key and then provide a value to filter by. By default we only display a few predetermined attributes; with the use of Custom Context, more attributes can be added to the logs, and you can specify additional filters for those custom attributes and values. See the gif below for adding the ‘org_id’ Custom Context we have added to our application as a filter.
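For reference, Custom Context is added through the APM agent rather than the logging agent. A sketch using the Scout Python APM agent’s Context API, mirroring the org_id attribute above (current_org is an illustrative variable from your own code; the Rails agent offers the analogous ScoutApm::Context.add):

import scout_apm.api

# Attach an "org_id" attribute to the current request/job via Custom Context,
# so logs can later be filtered by it as described above.
scout_apm.api.Context.add("org_id", current_org.id)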
The four default attributes:
- ID: Log record id
- Body: Log message / body
- Severity: Log severity – debug, info, etc.
- ServiceName: The service name, usually the process name – unicorn worker, sidekiq worker, etc.
After changing any of these filters, hitting “Load” will discard the current in-memory logs and begin loading new data matching the filter criteria.
Default Attributes
The Scout Logs agent enriches Rails logs with a few attributes by default:
- entrypoint: The top-level action (e.g. Controller class, in Rails) that the log was generated from.
- location: The file and line number where the log was generated.
In addition, it will capture all key-value pairs from any Custom Context that you have set. This means logs can be filtered by any Custom Context attributes.
Logs List
As records are loading, the Logs List allows you to filter your log data in straightforward ways. These filters also determine the data sent to the rest of the statistics and charts on the screen, allowing you to focus on the most relevant Logs for any given visualization.
Severity Filter
Select any combination of severity levels to filter the logs.
Regex Filter
The Regex Filter allows you to filter the Logs List using regex that evaluates against message content as well as log Attributes (both keys and values). We hope it lets you quickly narrow things down, whatever you might want to narrow by.
Brush Tool
On the chart, you can select the brush tool to click and drag across a time range to filter the List.
Loading Controls
At the top of the Logs List are a few controls for managing how data is loaded into the list, and how much is loaded at any given time.
- Progress Info: This section indicates the amount of data that has been returned vs scanned.
- Loading Bar: This bar indicates the progress of the data being loaded. You can hover over to get an approximate numerical value.
- Pause/Resume: These buttons allow you to pause and resume the loading of data. This is useful if you want to stop loading in order to inspect what has already been loaded, or if you don’t want to load the default 10K logs per batch.
- Full Screen: This button allows you to expand the Logs List to full screen mode.
Usage & Billing
Usage can be broken out into two categories: write usage and read usage. Write usage is the total number of uncompressed bytes sent to our servers in OTLP format. Read usage is the total number of uncompressed bytes we traversed when performing an operation.
To view your usage, click the cog in the bottom left, which will display a list of options. From this list, click “View Logs Usage”. Here, 14 days of both write and read usage will be streamed in.
Common Configurations (Rails)
The following configuration settings are available. These can be set in the scout_apm.yml configuration file or as environment variables with the SCOUT_ prefix, e.g. SCOUT_LOGS_INGEST_KEY.
Only logs_ingest_key is required. The rest are optional.
Setting Name | Description | Default | Required |
---|---|---|---|
logs_ingest_key | The Ingest Key to use for logs | | Yes |
logs_monitor | True or false. If true, monitor logs and send them to Scout | false | No |
logs_capture_level | The minimum log level to capture and send to Scout | debug | No |
log_level | Log level for the agent itself | info | No |
Additional Configurations for Rails Agent
These settings are internal and are not typically needed for normal operation, but if things like filesystem access cause issues, the following may be useful.
Setting Name | Description | Default | Required |
---|---|---|---|
logs_monitored | An array of log file paths to monitor. Overrides the default log destination detection | [] | No |
log_class | The underlying class to use for logging. Defaults to Ruby’s Logger class | Logger | No |
config_file | Location of the scout_apm.yml configuration file | config/scout_apm.yml | No |
logs_config | A hash of configuration options for merging into the Otel Collector’s config | {} | No |
logs_reporting_endpoint | The endpoint to send logs to | https://otlp.scoutotel.com:4317 | No |
logs_proxy_log_dir | The directory to store logs in for monitoring | /tmp/scout_apm/logs/ | No |
log_file_path | Location where the agent should log. Either a directory or “STDOUT” | Environment#root+log/, or STDOUT if running on Heroku | No |
log_stdout | True or false. If true, agent logs to STDOUT | false | No |
log_stderr | True or false. If true, agent logs to STDERR | false | No |
manager_lock_file | The location for obtaining an exclusive lock for running the monitor manager | /tmp/scout_apm/monitor_lock_file.lock | No |
monitor_pid_file | The location of the pid file for the monitor | /tmp/scout_apm/scout_apm_log_monitor.pid | No |
monitor_state_file | The location of the state file for the monitor | /tmp/scout_apm/scout_apm_log_monitor_state.json | No |
monitor_interval | The interval to check the collector health check and for new state logs | 60 | No |
monitor_interval_delay | The delay to wait before running the first monitor interval | 60 | No |
collector_log_level | The log level for the collector | error | No |
collector_sending_queue_storage_dir | The directory to store queue files | /tmp/scout_apm/file_storage/otc/ | No |
collector_offset_storage_dir | The directory to store offset files | /tmp/scout_apm/file_storage/receiver/ | No |
collector_pid_file | The location of the pid file for the collector | /tmp/scout_apm/scout_apm_otel_collector.pid | No |
collector_download_dir | The directory to store downloaded collector files | /tmp/scout_apm/ | No |
collector_config_file | The location of the config file for the collector | /tmp/scout_apm/config.yml | No |
collector_version | The version of the collector to download | 0.102.1 | No |
health_check_port | The port to use for the collector health check | Dynamically derived based on port availability | No |
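For example, if automatic log destination detection misses a custom log file, or /tmp is not writable in your environment, these settings can be overridden. A sketch in scout_apm.yml, assuming the environment-keyed layout and using illustrative paths (substitute your own):

production:
  logs_monitored:
    - /var/www/my_app/log/production.log # hypothetical path to your application log
  logs_proxy_log_dir: /var/tmp/scout_apm/logs/
  collector_download_dir: /var/tmp/scout_apm/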
Common Configurations (Python)
By default, attaching the ScoutOtelHandler to the root logger with the DEBUG level will send all logs from internal libraries to Scout. While useful for comprehensive monitoring, this can result in a very large volume of logs, and likely more noise than signal. Since the Python Logging agent provides a logging Handler, the source and severity of logs sent to Scout are configured via Python logging configuration.
There are many other possible logging configurations, but we hope these examples provide a useful starting point for some common needs.
Just your app
This logging configuration attaches the “scout” handler only to a custom logger, your_app:
from logging.config import dictConfig
from scout_apm_python_logging import ScoutOtelHandler
LOGGING_CONFIG = {
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"standard": {
"format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
},
},
"handlers": {
"scout": {
"level": "DEBUG",
"class": "scout_apm_python_logging.ScoutOtelHandler",
"service_name": "your-service-name",
},
"console": {
"level": "DEBUG",
"class": "logging.StreamHandler",
"formatter": "standard",
},
},
"loggers": {
"your_app": {
"handlers": ["scout"],
"level": "DEBUG",
"propagate": True # True by default
},
},
"root": {
"level": "WARNING",
"handlers": ["console"],
},
}
dictConfig(LOGGING_CONFIG)
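With this configuration, only records emitted through the your_app logger (or its children) reach Scout. A brief usage sketch; the logger names other than your_app are illustrative:

import logging

# Loggers named "your_app" (or children such as "your_app.views") use the "scout" handler,
# and also propagate to the root logger's console handler.
logger = logging.getLogger("your_app")
logger.info("sent to Scout and echoed to the console")

# Loggers that aren't configured fall back to the root logger: WARNING and above, console only.
logging.getLogger("some_library").warning("printed to the console, not sent to Scout")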
Key Points
- The root logger is set to the WARNING level and only uses the console handler, effectively excluding other libraries’ logs from Scout.
- Propagation means that all logs will also be handled by the root handlers, so in this case: output to the console.
- Adding the scout handler to the root would mean that WARNING or ERROR logs from all libraries would be sent to Scout. In that case, you could add the console handler to your_app and set propagate to False to avoid double-reporting.
Excluding specific libraries
Maybe you’re okay with getting most logs from various libraries, but want to reduce noise from especially verbose libraries.
### Initial configuration the same as above
...
"loggers": {
"your_app": {
"handlers": ["console", "scout"],
"level": "DEBUG",
},
"urllib3": {
"level": "WARNING",
},
"boto3": {
"level": "WARNING",
},
},
"root": {
"level": "INFO",
"handlers": ["console", "scout"],
},
}
Key Points:
- Explicitly configuring loggers for known libraries (like urllib3 and boto3 in the example) gives you fine-grained control over their log levels.
- Libraries that are not explicitly configured will inherit from the root logger, which is set to INFO level in this example. Your application’s logger is set to DEBUG level, ensuring you capture all necessary details from your own code.
This configuration allows you to:
- capture detailed logs from your application
- reduce noise from verbose libraries by setting their levels higher
- catch logs from unconfigured libraries at the INFO level
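A quick sketch of how records flow under this configuration (logger names other than your_app are illustrative):

import logging

logging.getLogger("your_app").debug("captured: your_app is set to DEBUG")
logging.getLogger("urllib3").info("dropped: urllib3 is raised to WARNING")
logging.getLogger("some_other_lib").info("captured: inherits INFO from the root logger")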
Installation: The Manual Way
Scout’s logging solution is based on OpenTelemetry and the OTLP protocol. If you are running an Otel Collector, we can be an endpoint of a logs pipeline. You can theoretically send us log records from any type of service, and we will group them according to the provided Ingest Key within Scout’s logging UI. If using the Collector directly, add your scout_logs_ingest_key to the headers of the OTLP exporter as x-scout-key, or set it as a scout.key attribute in code where you configure OpenTelemetry.
# otel-collector-config.yaml
receivers:
  ...
  filelog:
    include: [ /var/log/*.log ]
  ...
exporters:
  otlp:
    endpoint: https://otlp.scoutotel.com:4317
    headers:
      x-scout-key: aaaa-1111-aaaa-1111 # Provided in Scout settings
  ...
service:
  pipelines:
    ...
    logs:
      receivers: [otlp, filelog]
      processors: []
      exporters: [otlp]
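If you configure OpenTelemetry in code rather than through a Collector, the same Ingest Key can be supplied as a header on the OTLP log exporter. A minimal sketch using the OpenTelemetry Python SDK; its logs support is still marked experimental, so module paths may differ between SDK versions, and the key below is a placeholder:

import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource

# Describe the service emitting the logs
logger_provider = LoggerProvider(resource=Resource.create({"service.name": "your-service-name"}))
set_logger_provider(logger_provider)

# Export over OTLP with the Ingest Key in the x-scout-key header
# (alternatively, the key can be provided as a "scout.key" attribute)
exporter = OTLPLogExporter(
    endpoint="https://otlp.scoutotel.com:4317",
    headers=(("x-scout-key", "aaaa-1111-aaaa-1111"),),
)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))

# Bridge Python's standard logging into the OpenTelemetry log pipeline
logging.getLogger().addHandler(LoggingHandler(level=logging.DEBUG, logger_provider=logger_provider))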
FAQ
How does this work?
For Ruby applications, our logging agent will download and configure the Otel Collector and run it on your server. The Collector will then collect logs from your application (either specified as explicit files or automatically created by our agent) and send them to Scout.
For Python applications, we provide a custom logging handler which wraps the OpenTelemetry Python SDK. This handler integrates with your existing logging setup, and sends logs directly to Scout without requiring the Otel Collector to be installed separately.
The logs will be available in the Scout UI for you to search and filter. For Rails, we will automatically set a custom log formatter to include additional context in the logs sent to us. Your original logs will not be altered. Both the Ruby and Python logging agents are open source and available on GitHub.
What is the retention period on this log data?
Scout retains your log data for 14 days. If you require longer-term storage, please let us know, as we would like to create options for our customers; for now, you will also need to send your logs to an alternative location.