Wrangling Complex Rails apps with Docker Part 2 : Creating a docker-compose configuration

In part 1 of this series, we created a Dockerfile that encompasses the Rails portion of our app; now we need to connect it to other services. docker-compose is a lightweight orchestration tool that's perfect for this purpose. I don't recommend using docker-compose as a production deployment tool, but it works very well for local development.

Docker compose is perfect for Rails applications that have a lot of moving parts. As a consultant, I often struggle to set up a development machine for a client's project whose many moving parts depend on very specific versions of various software packages. In a perfect world we would be running the latest version of all of these packages, but as we know, software development is rarely perfect or ideal. Using docker-compose is a great solution to this dilemma, and it doesn't require developers to pollute their systems with many versions of outdated software.

Containerization can untangle the dependency knot created by the need to install old software and old libraries alongside newer versions of the same software. Ideally, this buys you time to upgrade the application to the latest and greatest versions. Containerization isn't a perfect solution; it has its warts and foibles, and it requires a developer to learn new commands to accommodate their development workflow. Still, it's worth it when working on projects like the kind I've described.

Given that we run one container per service, our containers (or services, in docker-compose terms) would be our puma server, webpack dev server, and postgres, plus redis and sidekiq for larger applications. We could technically create all of these containers on the command line, but that would be very tedious. docker-compose was created to take away that tedium, letting us specify exactly how we want each service created and what shared resource relationships exist between those services.
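
To appreciate what compose does for us, here's a rough sketch of wiring up just the database and the Rails server by hand (abbreviated; the exact flags depend on your setup):

# build the app image from part 1's Dockerfile
docker build -t appimage .

# create a shared network so the containers can find each other
docker network create backend

# run postgres with a named volume so data survives restarts
docker run -d --name db --network backend \
  -e POSTGRES_USER=myuser -e POSTGRES_PASSWORD=letmein -e POSTGRES_DB=appdb \
  -v pg_data:/var/lib/postgresql/data postgres:13

# run the rails server with the source tree mounted into the container
docker run -d --name web --network backend \
  -v "$PWD":/app -p 3000:3000 appimage \
  rails s -p 3000 -b 0.0.0.0

Multiply that by every service, add teardown and rebuild commands, and keep it all in sync across a team: that's the tedium compose eliminates.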

There are two ways to develop Rails applications with docker-compose. The first is to run everything inside of Docker: rails, rubocop, yarn, etc. The other is to run supporting services like postgres, memcache, and redis inside of Docker while running your tooling (rails, yarn, and friends) outside of it.

If you're on macOS, running rails and webpack outside of Docker is an attractive option because filesystem access from inside Docker on a Mac can be slow. If you're using WSL or Linux, running everything inside of Docker imposes no performance penalty. My advice is to try running everything inside of Docker first, regardless of which OS you're using, and if things seem too slow, try the hybrid approach that I mention later on in the post.

Let's create a Rails application with the following architecture:

  • Rails (puma) on port 3000
  • Webpack dev server listening on port 3035
  • PostgreSQL on port 5432

Docker compose can manage all of the important parts of your development application's infrastructure:

  • Networks: Virtual networks in Docker that segment parts of the application off from others (frontend and backend)
  • Services: Running daemons and servers, such as puma and webpack-dev-server
  • Volumes: Persistence and caching, so that when your containers go away, your data does not

Things to keep in mind when creating a docker compose configuration

  • Create a restart policy for your containers so that if one crashes, Docker restarts it automatically.
  • Declare volumes for any data that you want to survive a restart. This is very important for databases, as Docker containers are ephemeral by design.
  • Any data generated by your application that's not in a volume will not survive a restart. That includes file uploads, which tripped me up when I was new to Docker.
  • Volumes can also be used to cache data (for OSes with slow IO between Docker and the host OS). node_modules and your gem directory are good candidates for this.
  • Networks control which services can talk to each other. Declaring networks is considered a Docker best practice.
  • YAML anchors can be used to DRY out your compose file, as the x-app-volumes anchor in the file below demonstrates.

Running Rails inside of Docker

The advantage of running everything inside of Docker is that all of your dependencies are captured in the Dockerfile and the compose configuration. Handing the application off to another teammate is a breeze (especially if they are familiar with Docker). The disadvantages are the aforementioned slowness and the tedium of having to prefix all of your shell commands with docker-compose exec.

A few aliases can help alleviate this pain. The most common place to put these is in your shell's startup file: .bashrc or .bash_profile for bash, and .zshrc for zsh. If you're using a different shell or a shell framework, consult its documentation for the best place to put aliases.

alias dcb="docker-compose build"
alias dce="docker-compose exec"
alias dcr="docker-compose run"
alias dcu="docker-compose up"
alias dcd="docker-compose down"
alias dcl="docker-compose logs"
alias dcp="docker-compose ps"
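
With these in place, everyday commands get much shorter. For example (web is the Rails service we'll define below):

dce web rails console    # instead of docker-compose exec web rails console
dcr web bundle exec rubocop
dcl -f web               # follow the rails server logs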

Here's an annotated docker-compose.yml file that includes webpack-dev-server and a persistent testing container:

Side Note: One thing that may look new even to those experienced with Docker is the x-app-volumes: declaration in the compose file. This is an extension field, which is supported in compose files of version 3.4 and up.

version: '3.8'

#
# x-* keys are extension fields: dummy mappings
# used only as YAML anchors to
# DRY out the compose file
#
x-app-volumes: &volumes
  volumes:
    # mounts the current working directory into the Docker container
    - .:/app
    # caching volumes (for performance)
    - gem_cache:/bundle/vendor
    - node_modules:/app/node_modules
    - packs:/app/public/packs
    - packs_test:/app/public/packs-test

services:
  #
  # External Services
  #
  db:
    # At the time of this writing Postgres 13 is the latest.
    image: postgres:13
    restart: on-failure

    volumes:
      - pg_data:/var/lib/postgresql/data
      # allows us to dump the database somewhere e.g.:
      # pg_dump -U $POSTGRES_USER -F t $POSTGRES_DB > /backups/$POSTGRES_DB-$(date +%Y-%m-%d).tar
      - ./db/dumps:/backups

    environment:
      # Postgres docker containers are configured via env vars.
      - POSTGRES_PASSWORD=letmein
      - POSTGRES_USER=myuser
      - POSTGRES_DB=appdb

    networks:
      # The "backend" network is for supporting services that
      # are not the app server and not part of the HTTP layer.
      - backend

    ports:
      # I like to expose "system" services on a different port, 
      # in case there is already an instance of pgsql running
      # on the host machine. By doing this, postgres can be reached 
      # from localhost at port 5433.
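      # e.g. (assuming psql is installed on the host):
      #   psql -h localhost -p 5433 -U myuser appdb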
      - '5433:5432'
  #
  # App services
  #
  web:
    # this command starts a rails server and 
    # listens on all interfaces (0.0.0.0)
    command: bash -c "rm -f /app/tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"

    restart: on-failure

    # We can add an `environment:` yaml key here as well, 
    # but I prefer using an env file
    # to keep things cleaner. 
    env_file: .env.docker.development

    # This builds the image as "appimage" so that we can 
    # refer to it later in this file.
    image: appimage
    build:
      context: ./
      dockerfile: Dockerfile

    <<: *volumes
    networks:
      - frontend
      - backend
    ports:
      - '3000:3000'

    # These declarations allow a pry session to be
    # attached if desired.
    tty: true
    stdin_open: true

  webpacker:
    command: ./bin/webpack-dev-server
    restart: on-failure
    env_file: .env.docker.development

    # This is the app image we built in 'web'
    image: appimage

    # It's allowed to have both an .env file AND environment defined. 
    # When in doubt, refer to this:
    # https://docs.docker.com/compose/environment-variables/ 
    # This bit of configuration is critical
    # to the proper operation of webpack-dev-server, 
    # so I like to define it here.
    environment:
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0

    <<: *volumes

    # We don't need to connect to the DB at all here, 
    # so we just add the frontend network
    networks:
      - frontend

    # allows webpack-dev-server to be accessed at localhost:3035
    ports:
      - '3035:3035'

  #
  # Testing
  # To use: docker-compose exec test bin/rspec '/path/to/spec'
  #
  test:
    image: appimage

    # This does two things: it keeps our test container persistent,
    # and it preloads the test boilerplate for faster test runs.
    # You need the spring-commands-rspec gem installed
    # in order to make this work (assuming you're using rspec).
    command: bin/spring server

    # This is a different env file, for testing only.
    env_file: .env.docker.test

    # We don't need to allow access to the app at all during testing,
    # so it's a backend service
    networks:
      - backend

    <<: *volumes

# Our network declarations, used in the services above
networks:
  frontend:
  backend:

volumes:
  pg_data:
  gem_cache:
  node_modules:
  packs:
  packs_test:
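
With the stack up, specs run against the persistent test container. A quick sketch, assuming spring-commands-rspec has generated the bin/rspec binstub (via spring binstub rspec) and using a placeholder spec path:

docker-compose exec test bin/rspec spec/models/user_spec.rb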

We have to modify our application just a bit in order to use environment variables for configuration. This is a good idea anyway.

config/database.yml

default: &default
  url: <%= ENV["DATABASE_URL"] %>
  pool: <%= ENV["DB_POOL"] || ENV["RAILS_MAX_THREADS"] || 5 %>
  adapter: postgresql

production:
  <<: *default

development:
  <<: *default

test:
  <<: *default

We also need our env files:

.env.docker.development

DATABASE_URL="postgres://myuser:letmein@db/appdb"
BUNDLE_PATH=/bundle/vendor

.env.docker.test

DATABASE_URL="postgres://myuser:letmein@db/appdb_test"
BUNDLE_PATH=/bundle/vendor

Note that the env vars could go in the environment: key of each service, but I prefer to keep them in separate files, as the apps that I usually work on end up accumulating many variables over time.

Now that we have a Dockerfile and a docker-compose configuration, we can build and run our application:

docker-compose up -d will build our images and start the application. Once that's finished (and the first build will take a while…), run docker-compose exec web rails db:migrate (or use the dce alias that I mentioned earlier). The application can be accessed at http://localhost:3000.
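
Using the aliases from earlier, the whole sequence looks like this:

dcu -d                      # build (first run only) and start every service
dce web rails db:migrate    # set up the database
dcl -f web                  # optional: follow the rails logs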

🎉 Congratulations - You're running Rails on Docker! 🎉

Using Rails outside of Docker

As mentioned before, running Rails outside of Docker has the advantage of native filesystem access, which is much faster than sharing files between the host OS and Docker on macOS. Also, rails is just rails, not docker-compose exec web rails. The tradeoff is that each developer must install and manage their own Ruby and Node dependencies (ruby versions, gems, nodejs, npm, etc.) instead of letting Docker do it.

Here's how to run your supporting services in docker-compose while running Rails and node outside of Docker:

version: '3.8'

services:
  #
  # External Services
  #
  db:
    image: postgres:13
    restart: on-failure

    volumes:
      - pg_data:/var/lib/postgresql/data
      - ./db/dumps:/backups

    environment:
      - POSTGRES_PASSWORD=letmein
      - POSTGRES_USER=myuser
      - POSTGRES_DB=appdb

    ports:
      # note that we're mapping port 5432 here directly.
      # please ensure that your database.yml file is
      # configured to connect to localhost:5432.
      # The default is a unix domain socket.
      - '5432:5432'
    
volumes:
  pg_data:

Running docker-compose up -d will start your backend services. Then, to actually run the app, you simply run the usual commands:

Note: in order to use env files outside of Docker, you need to add the dotenv-rails gem to your Gemfile.

bundle install 
yarn install
rails server
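
If your app uses webpacker, you'll also want the dev server running in a second terminal (the same command the webpacker service ran inside Docker):

./bin/webpack-dev-server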

This is a very barebones example, running only postgres in Docker for your Rails app. In reality, you may have elasticsearch, redis, and memcache in the mix as well.

We need one more env file in order to connect to the database: .env.development:

DATABASE_URL="postgres://myuser:letmein@127.0.0.1/appdb"

If you're stuck or having trouble getting things running, it could be helpful to check out my example app.

Conclusion

Would I use Docker on every Rails application? No. But if an app has three or more services, or depends on specific versions of a service, then yes: Docker and docker-compose can save many hours per developer in onboarding costs. When the application is small, adding Docker isn't really worth it, as bringing Docker's complexity into a small application shifts the cost from onboarding to developer frustration.

Previous: Wrangling Complex Rails apps with Docker Part 1: The Dockerfile

Image Attributions

Photo by Cameron Venti on Unsplash

Posts in this series

  • Wrangling Complex Rails apps with Docker Part 1 : The Dockerfile
  • Wrangling Complex Rails apps with Docker Part 2 : Creating a docker-compose configuration
  • Wrangling Complex Rails apps with Docker Part 3 : Testing Rails under Docker
