Production Rails Tuning with Passenger: PassengerMaxPoolSize

Our co-author today is Jesse Newland, CTO of RailsMachine. Jesse keeps RailsMachine customers up and running and troubleshoots their toughest problems. We’re pleased to have him share some of his expertise on Phusion Passenger tuning.

Say your Rails application is running in production and it’s getting good traffic. Performance isn’t as good as you would like. You’ve already determined that your database is not the bottleneck. What’s your next move?

There is a good chance that PassengerMaxPoolSize needs to be adjusted. PassengerMaxPoolSize specifies how many instances of your application Passenger will spin up to service incoming requests. If you were running Mongrels back in the day, PassengerMaxPoolSize is equivalent to the number of mongrels you configured for your app. The value of PassengerMaxPoolSize has a major bearing on your application’s performance.

The Basic Rule of Thumb

A larger PassengerMaxPoolSize value yields better throughput but uses more memory.

Set PassengerMaxPoolSize too low and you may be constrained by the number of application instances on hand to service requests. Incoming requests get queued up behind running ones, and application response slows down accordingly.

On the other hand, set PassengerMaxPoolSize too high (relative to the amount of RAM you have) and Apache will start gobbling up swap space and begin to sputter. Consume too much swap, and your whole system can grind to a halt.

You need to find Passenger’s “happy place,” where memory is neither wasted nor over-consumed.

Scenario 1: Too few Passenger processes

RAM doesn’t do you any good just sitting there. A simple way to check your memory situation is free -ml:

~ $free -ml
             total       used       free     shared    buffers     cached
Mem:          1024        781        242          0         81        389

For illustration purposes, here’s the two-week history of memory usage as captured by Scout:

There is definitely some headroom here—memory usage is around half with very few spikes, leaving us around 500MB free.
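Note that the “free” column alone understates what’s actually available: the kernel can reclaim buffers and cache for applications when needed. A rough way to total it up (the helper name here is ours, not a standard tool):

```shell
# Sum free + buffers + cached from `free -m` output to estimate headroom;
# buffers and cache are reclaimable, so they count toward available memory.
headroom() {
  awk '/^Mem:/ { print $4 + $6 + $7 }'
}
```

On the box above, free -m | headroom works out to roughly 700 MB; the ~500 MB figure in the text leaves a sensible safety margin on top of that.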

Before we change settings, check that the number of application instances is actually a limiting factor. Passenger makes this easy to check with passenger-status. Look at the last line of its output:

   Waiting on global queue: 3

There’s our smoking gun—requests are backed up while we have memory to spare! Time to bump up PassengerMaxPoolSize.

Sidebar: If you don’t have Passenger’s global queueing turned on, there is (unfortunately) no way to see if requests are backed up.
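For reference, on Passenger 2.x the global queue is switched on with a one-line Apache directive (in Passenger 3 and later, global queueing is the default behavior):

```
# In your Apache configuration, alongside the other Passenger directives
PassengerUseGlobalQueue on
```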

How much to bump up PassengerMaxPoolSize?

So we already know that we have around 500MB sitting around. There are two more key pieces of information we need:

1. What’s your current PassengerMaxPoolSize value?

PassengerMaxPoolSize is set in the Apache configuration, typically located at /etc/apache2/apache2.conf. The default value is 6.

2. How much memory does an instance of your application generally use?

It’s easy to find this out with passenger-memory-stats. Part of the output will look like this:

--------- Passenger processes ----------
PID    Threads  VMSize    Private  Name
6120   1        152.4 MB  26.2 MB  Rails: /var/www/apps/wifi/current
637    1        165.1 MB  32.1 MB  Rails: /var/www/apps/wifi/current

That “Private” column is what we’re interested in. In this case, application instances take up around 30MB each. Given that we have around 500MB free, we can safely add 10 more instances to the pool. So, we’ll bump PassengerMaxPoolSize from 4 up to 14.
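The arithmetic is simple enough to sketch; both numbers below are the rough estimates from the text, not measurements:

```shell
# Back-of-the-envelope sizing for additional Passenger instances.
free_mb=500          # roughly 500 MB of RAM to spare
per_instance_mb=30   # each app instance uses ~30 MB private memory
echo $(( free_mb / per_instance_mb ))   # prints 16; adding only 10 leaves a margin
```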

In /etc/apache2/apache2.conf:

PassengerMaxPoolSize 14

Make sure you reload Apache to pick up the new settings. The command depends on your configuration; on Ubuntu it’s sudo /etc/init.d/apache2 reload. If you’re wondering whether reload is sufficient (as opposed to a full restart of Apache): reload is safe, and it ensures all running requests are processed before terminating.

After putting the new configuration in place, monitor your server to ensure everything keeps running correctly. Keep an eye on free memory, the amount of swap used, the number of Passenger processes, and Passenger’s global queue length.

Scenario 2: You’re using up all your RAM and running into swap

It’s easy to determine if you’re out of memory and using swap. A rule of thumb:

Any swap usage is bad.

If your server is using swap, your application will perform badly, end of story. So how do you know? Run vmstat 2 on your server. The ‘2’ means it refreshes every two seconds. Output from a server that is in good shape looks like this:

~ $vmstat 2
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0 139952 174176  81292 398068    1    0     6     7    0    0  1  0 97  0
 0  0 139952 174120  81292 398092    0    0     0     0 1133  382  0  0 99  0
 0  0 139952 174120  81292 398092    0    0     0    28 1087   32  0  0 99  0
 0  0 139952 174120  81292 398092    0    0     0     0 1155   25  0  0 99  0

You can ignore the first line in the output (in this case, the line with the 1 under si). The ‘si’ and ‘so’ columns are swap-in and swap-out, respectively, and show reads from and writes to swap. If there’s activity there, especially in ‘so’, you’re using swap instead of RAM, and things will be slower. See the Analyze your Server's "Swappiness" section in this helpful article over at Rails Machine for more details.
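If you want to check this without eyeballing columns, the same logic can be scripted; the helper name below is illustrative, not a standard tool:

```shell
# Count vmstat samples showing swap activity ('si'/'so' are columns 7 and 8).
# Skips the two header lines and the first sample, which reports averages
# since boot rather than current activity.
check_swap() {
  awk 'NR > 3 && ($7 > 0 || $8 > 0) { hits++ } END { print hits + 0 }'
}
```

Pipe a few samples through it, e.g. vmstat 2 5 | check_swap; a healthy box prints 0.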

What to Do?

If you’re running into swap, you basically have two options:

  1. Add more memory. If you’re running VMs, this should be straightforward.
  2. Reduce the memory consumption of processes on this machine.

It is unlikely that simply reducing PassengerMaxPoolSize on this machine will solve your problems. If Passenger is spinning up this many instances, you have too much traffic for your current setup.

Some options for freeing up memory include moving memory-hungry services (such as the database or full-text search) to another machine and hunting down memory bloat in your application processes.

How RailsMachine uses Scout to determine PassengerMaxPoolSize

If you’re just looking at the current memory usage, you’re ignoring past memory spikes. You may allocate too much memory to Passenger, exhaust your swap space, and bring down the server. Scout makes it easier to tune Passenger because:

  1. you can view the long-term history and volatility of memory usage
  2. the Phusion Passenger Monitor plugin automatically tracks the number of Passenger processes and total RAM used by Passenger processes.

The chart above shows the total RAM used and the RAM used by Passenger processes over the past week. We get an idea of volatility (November 15 had a large memory spike). Since this server has 1 GB of RAM, it looks like we have headroom for more Passenger processes (another 100 MB could safely be used by Passenger based on the chart).


The number of processes Passenger will spin up has a significant bearing on your application’s performance. The setting, PassengerMaxPoolSize, lives in your Apache configuration file. Generally speaking, you want to fill up as much RAM as possible with Passenger processes without touching swap, in order to maximize performance and throughput.

Adjust PassengerMaxPoolSize so that there is little free RAM left and no swap being used.

As you’re adjusting configurations, allow for other processes you may have on the machine: a database? full-text search servers? mail servers? cron jobs? periodic reporting? Keep in mind that some of these (especially cron jobs and reporting) don’t take up memory all the time—when they do run, make sure you don’t crowd them out. Finally, you may want to leave some room for ad-hoc needs: most people log into a production box and fire up script/console occasionally. You don’t want to be running so tight on memory that a console session borks the machine.