Elixir Configuration

Configuration Options

Code-based

The Elixir agent can be configured via the config/scout_apm.exs file. A config file with your organization key is available for download as part of the install instructions. An application name and key are required:

config :scout_apm,
  name: "Your App", # The app name that will appear within the Scout UI
  key: "YOUR SCOUT KEY"

Environment Variables

Alternatively, you can use environment variables of your choosing by formatting your configuration as a tuple with :system as the first value and the expected environment variable name as the second. To configure Scout via environment variables, uppercase the config key and prefix it with SCOUT_. For example, to set the key configuration via an environment variable, use: export SCOUT_KEY=YOURKEY

config :scout_apm,
  name: {:system, "SCOUT_NAME"},
  key: {:system, "SCOUT_KEY"}

Common Configurations

| Setting Name | Description | Default | Required |
|---|---|---|---|
| key | The organization API key. | | Yes |
| name | Name of the application (ex: ‘Photos App’). | | Yes |
| monitor | Whether monitoring data should be reported. | true | Yes |
| log_level | The logging level of the agent. Possible values: :debug, :info, :warn, and :error. | :info | No |
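Putting the common options together, a config/scout_apm.exs might look like the following sketch (the app name and key are placeholders):

```elixir
# config/scout_apm.exs
config :scout_apm,
  name: "Your App",             # required: appears in the Scout UI
  key: {:system, "SCOUT_KEY"},  # required: organization API key, read from the environment
  monitor: true,                # set to false to stop reporting (e.g. in dev/test)
  log_level: :warn              # quiet the agent logs; defaults to :info
```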

Additional Configurations

| Setting Name | Description | Default | Required |
|---|---|---|---|
| host | The protocol + domain where the agent should report. | https://scoutapm.com | No |
| revision_sha | The Git SHA associated with this release. | See docs | No |
| ignore | An array of URL prefixes to ignore in the Scout Plug instrumentation. Routes that match the prefixed path (ex: ['/health', '/status']) will be ignored by the agent. | [] | No |
| hostname | The host registered with the Core Agent. | | No |
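For example, to ignore health-check endpoints and tag deploys with a Git SHA (the values shown are placeholders):

```elixir
config :scout_apm,
  revision_sha: System.get_env("GIT_SHA"),  # tie traces to a deployed revision
  ignore: ["/health", "/status"],           # skip instrumentation for these URL prefixes
  hostname: "web-1.example.com"             # override the host registered with the Core Agent
```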

Core Agent Configurations

| Setting Name | Description | Default | Required |
|---|---|---|---|
| core_agent_dir | Path to create the directory which will store the Core Agent. | /tmp/scout_apm_core | No |
| core_agent_download | Whether to download the Core Agent automatically, if needed. | true | No |
| core_agent_launch | Whether to start the Core Agent automatically, if needed. | true | No |
| core_agent_permissions | The permission bits to set when creating the directory of the Core Agent. | 700 | No |
| core_agent_full_name | The release/url we look for when downloading the core-agent. | Auto-detected | No |
| core_agent_triple | If you are running a MUSL-based Linux (such as Alpine Linux), you may need to explicitly specify the platform triple, e.g. x86_64-unknown-linux-musl. | Auto-detected | No |
| core_agent_log_level | The log level of the Core Agent process. One of: "trace", "debug", "info", "warn", "error". This does not affect the log level of the Elixir agent itself; use the log_level setting for that. | "info" | No |
| core_agent_log_file | The log file for the Core Agent process. | "/path/to/your/log/file" | No |
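A sketch of overriding the Core Agent defaults, for example on a MUSL-based distribution (the paths and triple shown are illustrative):

```elixir
config :scout_apm,
  core_agent_dir: "/opt/scout_apm_core",
  core_agent_triple: "x86_64-unknown-linux-musl",  # needed on MUSL-based Linux such as Alpine
  core_agent_log_level: "debug",
  core_agent_log_file: "/var/log/scout/core-agent.log"
```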

Environments

It typically makes sense to treat each environment (production, staging, etc) as a separate application within Scout. To do so, configure a unique app name for each environment. Scout aggregates data by the app name.

An example:

# config/staging.exs

config :scout_apm,
  name: "YOUR APP - Staging"

Instrumenting Common Libraries

We’ve collected best practices for instrumenting common transactions and timing functions below. If you have a suggestion, please share it. See our custom instrumentation quickstart for more details on adding instrumentation.

Phoenix Channels

Web or background transactions?

Naming channel transactions

Provide an identifiable name based on the message the handle_out/3 or handle_in/3 callback receives.

An example:

defmodule FirestormWeb.Web.PostsChannel do
  use FirestormWeb.Web, :channel
  import ScoutApm.Tracing

  # Will appear under "Web" in the UI, named "PostsChannel.update"
  @transaction_opts [type: "web", name: "PostsChannel.update"]
  deftransaction handle_out("update", msg, socket) do
    push socket, "update", FetchView.render("index.json", msg)
  end
end

Plug Chunked Response (HTTP Streaming)

In a Plug application, a chunked response needs to be instrumented directly, rather than relying on the default Scout instrumentation Plug. The key part is to call start_layer before streaming begins, then call before_send once the chunked response is complete.

def chunked(conn, _params) do
  # The "Controller" argument is required, and should not be changed. The second argument is the
  # name this endpoint will appear as in the Scout UI. The `action_name` function determines this
  # automatically.
  ScoutApm.TrackedRequest.start_layer("Controller", ScoutApm.Plugs.ControllerTimer.action_name(conn))

  conn =
    conn
    |> put_resp_content_type("text/plain")
    |> send_chunked(200)

  {:ok, conn} =
    Repo.transaction(fn ->
      Example.build_chunked_query(...)
      |> Enum.reduce_while(conn, fn data, conn ->
        case chunk(conn, data) do
          {:ok, conn} ->
            {:cont, conn}

          {:error, :closed} ->
            {:halt, conn}
        end
      end)
    end)

  ScoutApm.Plugs.ControllerTimer.before_send(conn)

  conn
end

Then have the default instrumentation ignore the endpoint’s URL prefix (since it is manually instrumented now). See the ignore configuration for more details.

config :scout_apm,
  name: "My Scout App Name",
  key: "My Scout Key",
  ignore: ["/chunked"]

GenServer calls

It’s common to use GenServer to handle background work outside the web request flow. Instrument these handlers as background transactions.

An example:

defmodule Flight.Checker do
  use GenServer
  import ScoutApm.Tracing

  # Will appear under "Background Jobs" in the UI, named "check".
  @transaction_opts [type: "background", name: "check"]
  deftransaction handle_call({:check, flight}, _from, state) do
    # Do work...
  end
end

Task.start

These execute asynchronously, so treat them as background transactions.

Task.start(fn ->
  # Will appear under "Background Jobs" in the UI, named "Crawler.crawl".
  ScoutApm.Tracing.transaction(:background, "Crawler.crawl") do
    Crawler.crawl(url)
  end
end)

Task.Supervisor.start_child

Like Task.start, these execute asynchronously, so treat them as background transactions.

Task.Supervisor.start_child(YourApp.TaskSupervisor, fn ->
  # Will appear under "Background Jobs" in the UI, named "Crawler.crawl".
  ScoutApm.Tracing.transaction(:background, "Crawler.crawl") do
    Crawler.crawl(url)
  end
end)

Exq

To instrument Exq background jobs, import ScoutApm.Tracing, use deftransaction to define the function, and add a @transaction_opts module attribute to optionally override the name and type:

defmodule MyWorker do
  import ScoutApm.Tracing

  # Will appear under "Background Jobs" in the UI, named "MyWorker.perform".
  @transaction_opts [type: "background"]
  deftransaction perform(arg1, arg2) do
    # do work
  end
end

Absinthe

Requests to the Absinthe plug can be grouped by the GraphQL operationName under the “Web” UI by adding this plug to your pipeline.
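The plug module itself comes from the snippet referenced above; as a hypothetical sketch only, if that module were named MyAppWeb.ScoutAbsinthePlug, it would sit in the pipeline ahead of the forward to Absinthe.Plug:

```elixir
defmodule MyAppWeb.Router do
  use Phoenix.Router

  pipeline :graphql do
    # Hypothetical module name: groups the transaction by the GraphQL operationName.
    plug MyAppWeb.ScoutAbsinthePlug
  end

  scope "/api" do
    pipe_through :graphql
    forward "/graphql", Absinthe.Plug, schema: MyAppWeb.Schema
  end
end
```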

HTTPoison

Download this [Demo.HTTPClient module](https://gist.github.com/itsderek23/048eaf813af4a1a31a219d75221eb7b7) (you can rename it to something more fitting) into your app’s /lib folder, then alias Demo.HTTPClient when calling HTTPoison functions:

defmodule Demo.Web.PageController do
  use Demo.Web, :controller
  # Will route function calls to `HTTPoison` through `Demo.HTTPClient`, which times the execution of the HTTP call.
  alias Demo.HTTPClient

  def index(conn, _params) do
    # These calls are timed as well; run them before sending a response.
    HTTPClient.post("https://cnn.com", "")
    HTTPClient.get!("http://localhost:4567")

    # "HTTP" will appear on timeseries charts. "HTTP/get" and the url "https://cnn.com" will appear in traces.
    case HTTPClient.get("https://cnn.com") do
      {:ok, %HTTPoison.Response{} = response} ->
        # do something with the response
        render(conn, "index.html")
      {:error, %HTTPoison.Error{} = error} ->
        # do something with the error
        render(conn, "error.html")
    end
  end
end

MongoDB Ecto

Download this example MongoDB Repo module to use in place of your existing MongoDB Repo module.