After Twitter moved from Ruby to Scala in 2009, the story was born that Ruby on Rails can’t scale. The story goes that it lacks robustness, is a memory hog, and lacks the concurrency features you need to grow an application.
This has been the prevailing wisdom for over a decade. And then along came Shopify, showing that, as CEO Tobi Lütke says, Ruby on Rails is a framework that can process billions of events per day and evidently does scale.
Ruby on Rails is an excellent candidate for scaling. It follows the “convention over configuration” principle, which means a lot of decisions are made for you by the framework. This standardization leads to more maintainable and predictable code, making it easier to grow the development team and the codebase.
Rails has a mature and extensive ecosystem with a wide array of gems (libraries) available for almost any feature or functionality you might need. This speeds up development and provides well-tested solutions to common problems, which is crucial for scaling applications efficiently.
And Rails applications can be containerized and deployed in orchestrated environments like Kubernetes, designed to handle scaling seamlessly.
It is the last one we want to concentrate on today—how to take your Rails app, deploy it via containerization, and then orchestrate deployment with Kubernetes.
Building Docker Containers in Ruby on Rails
If you are reading this, you have a Ruby on Rails app up and running, so we’ll skip straight to the helpful stuff: containerizing our application using Docker. If you don’t already have Docker installed, head to the Docker website, install Docker Desktop, and sign up for Docker Hub.
With that done, we can start with what is known as “configuration as code.” Configuration as code means that all the environment and infrastructure settings needed to run your Rails application are defined in code files rather than being set up manually.
The advantages of configuration as code are:
– Reproducibility: Your environment can be replicated across different machines or deployment environments (like staging and production), reducing “it works on my machine” problems.
– Version Control: Just like your application code, your environment and infrastructure setup can be version-controlled, allowing you to track changes over time and revert to previous configurations if necessary.
– Collaboration: It simplifies collaboration among team members and even across teams (like development, QA, and operations) as the configuration is explicit and versioned.
– Automation: It plays well with automated deployment pipelines, making continuous integration and continuous deployment (CI/CD) processes more streamlined.
In the context of Dockerizing a Ruby on Rails application, this involves creating a Dockerfile and possibly a docker-compose.yml file.
The Dockerfile defines your application environment. This file contains a set of instructions to build your Docker image. A typical Dockerfile for a Rails app may look something like this:
# Base Image
FROM ruby:3.0
# Install dependencies
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
# Set work directory
WORKDIR /my-rails-app
# Copy the Gemfile and Gemfile.lock
COPY Gemfile /my-rails-app/Gemfile
COPY Gemfile.lock /my-rails-app/Gemfile.lock
# Bundle install
RUN bundle install
# Copy the main application
COPY . /my-rails-app
# Expose port 3000 to the Docker host
EXPOSE 3000
# Start the main process
CMD ["rails", "server", "-b", "0.0.0.0"]
This Dockerfile is a script used by Docker to create an image for a Ruby on Rails application. Here’s an explanation of each part:
1. Base Image: FROM ruby:3.0
– Specifies the base image to use for the container. Here, it’s using version 3.0 of the official Ruby Docker image.
2. Install Dependencies: RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
– Updates the package lists quietly (the -qq flag) and installs Node.js and the PostgreSQL client, which are common dependencies for Rails applications (Node.js for the asset pipeline and the PostgreSQL client for database interactions).
3. Set Work Directory: WORKDIR /my-rails-app
– Sets the working directory inside the container to /my-rails-app. Future commands will run in this directory.
4. Copy Gemfile and Gemfile.lock:
– COPY Gemfile /my-rails-app/Gemfile
– COPY Gemfile.lock /my-rails-app/Gemfile.lock
– Copies the Gemfile and Gemfile.lock from your project to the /my-rails-app directory in the container. These files define the Ruby gem dependencies for your application.
5. Bundle Install: RUN bundle install
– Runs bundle install to install the Ruby gems specified in the Gemfile.
6. Copy the Main Application: COPY . /my-rails-app
– Copies the rest of your application code into the /my-rails-app directory in the container.
7. Expose Port 3000: EXPOSE 3000
– Informs Docker that the container listens on port 3000 at runtime. This is the default port for Rails applications.
8. Start the Main Process: CMD ["rails", "server", "-b", "0.0.0.0"]
– Sets the default command to run when the container starts. In this case, it starts the Rails server and binds it to all IP addresses inside the container (0.0.0.0).
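Because the COPY . /my-rails-app step pulls in the entire project directory, it’s worth pairing the Dockerfile with a .dockerignore file so version-control data, logs, and temp files don’t bloat the image. A minimal sketch (adjust the entries to your project):
.git
log/
tmp/
node_modules/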
You then run docker build to create an image of your application.
docker build -t my-rails-app .
This image includes everything your application needs to run. “Everything” here means the application’s code, runtime, system tools, system libraries, and any other dependencies required to run the application.
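You can confirm the image was built by listing it:
docker image ls my-rails-app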
To run your Rails application, all you need to do is run:
docker run -p 3000:3000 my-rails-app
The other config-as-code file we want is a docker-compose.yml file for orchestrating services. With Compose, you use a YAML file to configure your application’s services, networks, and volumes. Then, with a single set of commands, you create and start all the services from your configuration. This simplifies managing and orchestrating multiple containers, especially for applications that require several interconnected components (like a web server, a database, and a cache).
Here’s ours:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  web:
    build: .
    command: rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/my-rails-app
    ports:
      - "3000:3000"
    depends_on:
      - db
In this example, you have a web service for your Rails app and a db service for your PostgreSQL database.
The web service builds the Docker image using the Dockerfile above, then overrides the default command to start the Rails server, binding it to port 3000 on all network interfaces inside the container. It mounts the current directory (where the Compose file is) into /my-rails-app in the container, allowing for live code updates without rebuilding the image. Finally, it maps port 3000 of the host to port 3000 of the container, making the Rails application accessible from the host machine.
For the database, it maps a volume from ./tmp/db on the host to /var/lib/postgresql/data inside the container. This is used for database data persistence.
Now, to start both the web and db service, run:
docker-compose up
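On the first run, the database inside the db container still needs to be created and migrated. Assuming the standard Rails tasks, you can run them through the web service:
docker-compose run web rails db:create db:migrate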
By using Docker and the principle of configuration as code, you ensure a consistent environment for your Ruby on Rails application.
Orchestrating Your Containers
Docker Compose helps you orchestrate locally on a small scale, but if you think big, you want to marry your Docker containers with Kubernetes.
Using Kubernetes container orchestration for a Ruby on Rails application provides several significant benefits, particularly in deployment, scaling, and managing the application lifecycle.
– Kubernetes allows for automated deployment and scaling of containerized applications. It can automatically scale the application up or down, depending on demand.
– Kubernetes can also restart containers that fail, replace and reschedule containers when nodes die, kill containers that don’t respond to user-defined health checks (see the probe sketch after this list), and not advertise them to clients until they are ready to serve.
– Kubernetes can expose a container using the DNS name or its IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic to stabilize the deployment.
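Those user-defined health checks are expressed as liveness and readiness probes on a container. A minimal sketch, added under the container entry of the Deployment we define below, assuming the app serves a health check route at /up (the default in Rails 7.1+; older apps need their own endpoint):
livenessProbe:
  httpGet:
    path: /up
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /up
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
Kubernetes restarts a container whose liveness probe fails and withholds traffic from one whose readiness probe fails.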
Scaling a Ruby on Rails application using Kubernetes can be complex, especially when considering advanced scenarios. It involves increasing the number of replicas and ensuring the infrastructure can handle the increased load, managing database connections, implementing load balancing, and potentially adding more services.
Let’s walk through an example showing how to scale your application. Again, Kubernetes uses a configuration-as-code paradigm. We need two essential files for deployment to Kubernetes: a deployment file and a service file. These serve different but complementary purposes. They are both manifest files written in YAML format that define various aspects of an application running in a Kubernetes cluster.
A Deployment file defines and controls how an application’s containers should run in a Kubernetes cluster. It describes the desired state of Pods, the smallest deployable units in Kubernetes that encapsulate containers.
A Service file defines a logical set of Pods (typically defined by a Deployment) and a policy to access them. Essentially, Services enable network access to a set of Pods, often providing a stable IP address and DNS name by which the Pods can be accessed.
So your deployment manifest focuses on managing Pods, ensuring they run as specified. It handles the deployment, scaling, and updating of Pods. Your service manifest provides a consistent endpoint for accessing the Pods managed by a deployment. It handles networking and exposure of the application, either internally within the cluster or externally.
Here is a rails-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-rails-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-rails-app
  template:
    metadata:
      labels:
        app: my-rails-app
    spec:
      containers:
        - name: my-rails-app
          image: yourusername/my-rails-app:latest
          ports:
            - containerPort: 3000
          env:
            - name: DATABASE_URL
              value: postgres://username:password@postgres-service:5432/mydatabase
This file’s most crucial part is the spec defining the desired state and deployment characteristics. It specifies that three instances (replicas) of the my-rails-app Pod should be running at all times. Within each Pod, we want to use our my-rails-app container, so that is specified in containers. image: yourusername/my-rails-app:latest sets the Docker image to use for the container; replace yourusername/my-rails-app:latest with the path to your Docker image in the registry.
The env part of the file is used to set environment variables within the container. Here, we only have one, named DATABASE_URL, whose value provides the database connection string: postgres://username:password@postgres-service:5432/mydatabase.
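Hardcoding credentials in a manifest is fine for a local walkthrough but not for production. A common alternative, sketched here with a hypothetical rails-secrets name, is to store the connection string in a Kubernetes Secret:
apiVersion: v1
kind: Secret
metadata:
  name: rails-secrets
type: Opaque
stringData:
  DATABASE_URL: postgres://username:password@postgres-service:5432/mydatabase
The Deployment’s container then references it instead of embedding the value:
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: rails-secrets
        key: DATABASE_URL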
Next is our rails-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: rails-service
spec:
  selector:
    app: my-rails-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
The selector must match the Pod labels from the Deployment (app: my-rails-app), and the Service exposes those Pods through a LoadBalancer: traffic arriving on port 80 is forwarded to port 3000 in the containers.
We use these two files to deploy our application on a Kubernetes cluster. Here, we’ll use Minikube, which allows you to deploy Kubernetes clusters locally, but in production you would likely use a hosted service such as Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), or Azure Kubernetes Service (AKS).
You can learn how to install and get started with Minikube from its documentation. For our application, we have to apply our configurations:
kubectl apply -f rails-deployment.yaml
kubectl apply -f rails-service.yaml
The first command tells Kubernetes to create the resources defined in the rails-deployment.yaml file, which includes creating a Deployment that manages your Rails application Pods. The second command creates a Service based on the rails-service.yaml file, which makes your Rails application accessible within the Kubernetes cluster.
We can then use the minikube service command to access it:
minikube service rails-service
This will open the Rails application in your default web browser.
Other helpful commands are:
– Check the status of your deployment:
kubectl get deployments
– Check the status of your pods:
kubectl get pods
– Check the status of your service:
kubectl get services
If you change your Rails application, you must rebuild your Docker image. Then, you can update your deployment by re-applying the YAML files or using specific Kubernetes commands for rolling updates.
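For example, assuming you tag and push a new image version, you can trigger a rolling update directly with kubectl:
# Build and push the updated image (the v2 tag is illustrative)
docker build -t yourusername/my-rails-app:v2 .
docker push yourusername/my-rails-app:v2
# Point the Deployment at the new image and watch the rollout
kubectl set image deployment/my-rails-app my-rails-app=yourusername/my-rails-app:v2
kubectl rollout status deployment/my-rails-app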
Scaling and Adding Services
The above example is a basic deployment onto Kubernetes. But the idea of containerization and Kubernetes deployment is orchestration. Container orchestration lets you manage complex tasks and workflows, such as deployment scaling, networking between containers, load balancing, and health monitoring of containers, efficiently and with minimal manual intervention. This orchestration capability ensures that the desired state of the deployment, such as the number of replicas of a pod, network rules, and persistent storage, is maintained in a dynamic environment.
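The simplest orchestration task is scaling by hand. Assuming the Deployment above, you can change the replica count with a single command:
kubectl scale deployment my-rails-app --replicas=5
Below, we hand that decision over to an autoscaler instead.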
Persistent Storage
Let’s first set up some persistent storage with a database service. If you’re using PostgreSQL, a postgres-deployment.yaml will look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-storage
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
This specifies using the postgres:latest Docker image and exposes PostgreSQL’s default port, 5432. The deployment is labeled app: postgres, which is used to identify the pods created by this deployment.
It includes a volume mount, /var/lib/postgresql/data, linked to a postgres-storage volume. This volume is backed by a PersistentVolumeClaim (PVC) named postgres-pvc, which will be defined separately in the cluster. The PVC allows data stored in the PostgreSQL database to persist across pod restarts and crashes, making the database stateful. This setup ensures a running instance of PostgreSQL is managed by Kubernetes with persistent storage.
Let’s set up our postgres-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
This defines a Kubernetes PersistentVolumeClaim (PVC), a request for storage by other resources like Pods. This PVC requests a storage volume with a single access mode, ReadWriteOnce, meaning a single node can mount the volume as read-write. It specifies a storage request of 10Gi (10 gigabytes), indicating the amount of storage space it requires.
This PVC is used to provision persistent storage in a cluster, ensuring that data can be retained and survive the restarting or rescheduling of Pods, such as in the case of a database like PostgreSQL.
Finally, we need our postgres-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
This service is designed to provide a consistent network endpoint for accessing the PostgreSQL database running in Pods labeled with app: postgres. The service routes network traffic using the TCP protocol to port 5432, the default port used by PostgreSQL. This setup allows other components in the Kubernetes cluster to reliably connect to the PostgreSQL database using the service, abstracting away the actual Pod instances and their lifecycle.
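On the Rails side, nothing extra is needed to use this endpoint: Rails picks up the DATABASE_URL environment variable we set in the Deployment, and that URL already points at postgres-service. If you prefer an explicit config/database.yml, a minimal sketch would be:
# config/database.yml
production:
  adapter: postgresql
  url: <%= ENV["DATABASE_URL"] %>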
As with our main deployment and service, we then just apply these files:
kubectl apply -f postgres-deployment.yaml
kubectl apply -f postgres-pvc.yaml
kubectl apply -f postgres-service.yaml
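You can verify that the claim was bound and the database pod is running:
kubectl get pvc postgres-pvc
kubectl get pods -l app=postgres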
Autoscaling in Kubernetes
We can implement a Horizontal Pod Autoscaler (HPA) to automatically scale the number of pod replicas based on CPU utilization or other selected metrics.
HPA Configuration (rails-hpa.yaml):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: rails-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-rails-app
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
Based on CPU utilization, this is configured to automatically scale the number of pods in the my-rails-app deployment (as specified in scaleTargetRef). The HPA adjusts the number of pod replicas between a minimum of 3 and a maximum of 10. It targets a CPU utilization of 70%, meaning it will increase the number of replicas if the average CPU utilization across all pods exceeds 70% and decrease it when usage falls below this threshold. This ensures the my-rails-app deployment scales dynamically in response to workload changes, maintaining performance and resource efficiency.
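For the CPU target to work, the cluster needs a metrics source, and the Deployment’s containers should declare CPU requests, since utilization is measured as a percentage of what is requested. Under the container spec in rails-deployment.yaml, you might add (the value is illustrative):
resources:
  requests:
    cpu: 250m
With that in place, enable a metrics source (Minikube ships one as an addon; hosted clusters usually provide one) and apply the autoscaler:
minikube addons enable metrics-server
kubectl apply -f rails-hpa.yaml
kubectl get hpa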
Kubernetes Monitoring and Logging
Above, we have manually configured each of our services. But this isn’t the only way to deploy services with Kubernetes. Helm simplifies this process by managing packages of pre-configured Kubernetes resources, known as charts. It is like gem, apt, or npm for Kubernetes.
Here, we will deploy Prometheus and Grafana to monitor our clusters via Helm.
First, install Helm on your local machine. Helm provides installation guides for various platforms on their official website.
Then, we need to deploy Prometheus. Add the Prometheus Helm Chart repository using:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
You can install Prometheus with a default configuration using the following command:
helm install prometheus prometheus-community/prometheus
As with other services, you can check if the Prometheus pods are running using:
kubectl get pods
You should see the pods for the Prometheus server, Alertmanager, and node-exporter running.
By default, the Prometheus server is not exposed externally. To access it via a web browser, you can use port-forwarding:
kubectl port-forward deploy/prometheus-server 9090
Then, access Prometheus by navigating to http://localhost:9090 in your browser.
Deploying Grafana is a similar process. First, add the Grafana Helm Chart repository:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
Deploy Grafana using Helm:
helm install grafana grafana/grafana
Finally, check if the Grafana pod is running:
kubectl get pods
You should see the pod for Grafana running. Grafana also needs port-forwarding to be accessed externally:
kubectl port-forward deploy/grafana 3000
Then, access Grafana by navigating to http://localhost:3000 in your browser.
You can then log into Grafana using the default username admin. To get the auto-generated password, run:
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
Once logged in, add Prometheus as a data source via the Grafana UI. The URL for Prometheus would typically be http://prometheus-server:9090, assuming both Prometheus and Grafana are deployed in the same namespace.
You now have Prometheus and Grafana running in your Kubernetes cluster. Prometheus is collecting metrics, and Grafana is ready for you to create dashboards for visualizing those metrics.
Scaling Ruby on Rails
This is still just the beginning. From here, you can explore advanced Kubernetes features such as stateful sets for databases, autoscalers for dynamic scaling, and ingress controllers for efficient traffic management. Also, consider adopting CI/CD pipelines for streamlined deployments.
The beauty of Kubernetes and containerization is the ease with which you can scale, add the services you need, and continue managing your application. As your application grows, Kubernetes offers the flexibility to adapt to changing demands, while Ruby on Rails provides a robust and productive framework for rapid development.
Together, Kubernetes and Ruby on Rails allow you to create highly scalable, resilient applications that can efficiently handle increased traffic and workloads, making this duo a compelling choice for developers aiming to build enterprise-level applications.