The beauty of Kamal destinations is that you can avoid repeating yourself. Normally, I am not that interested in reducing duplication as it tends to create unnecessary abstractions, but in the case of server infrastructure, I want things as automated as possible.
Let's start with a basic config/deploy.yml:
```yaml
# config/deploy.yml

# Name of your application. Used to uniquely configure containers.
service: myapp

# Name of the container image.
image: ghuser/myapp

# Credentials for your image host.
registry:
  server: ghcr.io
  username:
    - KAMAL_REGISTRY_USERNAME
  password:
    - KAMAL_REGISTRY_PASSWORD

# Force the use of roles
allow_empty_roles: false

# Share the path to the rails assets
asset_path: /rails/public/assets

primary_role: web

# Require a destination to be used to ensure we never deploy without one
require_destination: true

builder:
  arch: arm64
  cache:
    type: registry
    options: mode=max

ssh:
  user: deploy

volumes:
  - /data/storage:/rails/storage

aliases:
  console: app exec --interactive --reuse "bin/rails console"
  shell: app exec --interactive --reuse "bash"
  logs: app logs -f
  dbc: app exec --interactive --reuse "bin/rails dbconsole"
```
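If you haven't used aliases before: each entry becomes a regular kamal subcommand. Day-to-day access then looks roughly like this (shown with the staging destination we set up below):

```bash
kamal console -d staging   # bin/rails console inside the running app container
kamal shell -d staging     # bash inside the running app container
kamal logs -d staging      # tail the application logs
kamal dbc -d staging       # bin/rails dbconsole
```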
I have tried commenting on the most important things; let's go through them one by one...
- You definitely want to require a destination in this setup
- You need an ssh user (this is used by the provisioning script in the previous article: https://mhenrixon.com/articles/server-provisioning-for-a-kamal-setup)
- You can put all shared things into config/deploy.yml and override them in your destination files, but let's keep things clean (see the sketch below)
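To illustrate the override behaviour, here is a made-up sketch (not part of my actual setup): when a key is defined in both files, the destination file wins.

```yaml
# config/deploy.yml (shared defaults)
ssh:
  user: deploy

# config/deploy.staging.yml (hypothetical override — the destination value wins)
ssh:
  user: deploy_staging
```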
Now let's create a destination called staging:
```yaml
# config/deploy.staging.yml

servers:
  # The key is the role you want to deploy to.
  web:
    hosts:
      - 139.188.99.233
  # The key is the role you want to deploy to.
  job:
    hosts:
      - 139.188.99.233
    cmd: bin/jobs
    env:
      clear:
        JOB_THREADS: 3
        JOB_CONCURRENCY: 3

proxy:
  host: mhenrixon.com
  app_port: 3000
  ssl: false
  response_timeout: 10
  buffering:
    requests: true
    responses: true
    max_request_body: 40_000_000
    max_response_body: 0
    memory: 2_000_000

env:
  clear:
    # The db host uses the docker container `myapp-db` to connect
    DB_HOST: myapp-db
    PORT: 3000
    RAILS_ENV: staging
    RAILS_LOG_TO_STDOUT: true
    RAILS_MAX_THREADS: 4
    RAILS_MIN_THREADS: 4
    RAILS_SERVE_STATIC_FILES: true
    # The redis url uses the docker container `myapp-redis` to connect
    REDIS_URL: redis://myapp-redis:6379/0
    RUBY_YJIT_ENABLE: 1
    WEB_CONCURRENCY: 4
  secret:
    - POSTMARK_API_TOKEN
    - R2_ACCESS_KEY_ID
    - R2_BUCKET
    - R2_ENDPOINT
    - R2_REGION
    - R2_SECRET_ACCESS_KEY
    - RAILS_MASTER_KEY
    - SECRET_KEY_BASE

accessories:
  db:
    image: postgres:16.4-alpine
    cmd: postgres -c shared_preload_libraries=pg_stat_statements -c pg_stat_statements.track=all -c max_connections=200
    directories:
      - /data/postgres:/var/lib/postgresql/data
    env:
      clear:
        POSTGRES_USER: myapp
      secret:
        - POSTGRES_PASSWORD
    files:
      - config/init.sql:/docker-entrypoint-initdb.d/setup.sql
    host: 139.188.99.233
    # NOTE: Prevent connections on the public port, this locks down access to
    # docker and host. Public port won't be exposed through the firewall
    port: 127.0.0.1:5432:5432
  db_backup:
    image: eeshugerman/postgres-backup-s3:16
    host: 139.188.99.233
    env:
      clear:
        SCHEDULE: '@daily'
        BACKUP_KEEP_DAYS: 365
        POSTGRES_HOST: 139.188.99.233
        POSTGRES_USER: myapp
        POSTGRES_DATABASE: myapp
      secret:
        - POSTGRES_PASSWORD
        - S3_ACCESS_KEY_ID
        - S3_SECRET_ACCESS_KEY
        - S3_BUCKET
        - S3_REGION
        - S3_PREFIX
        - S3_ENDPOINT
  redis:
    image: redis:7.2-alpine
    host: 139.188.99.233
    cmd: "redis-server --appendonly yes --appendfsync everysec --save 900 1 300 10 60 1000 --dir /data"
    directories:
      - /data/redis:/data
    # NOTE: Prevent connections on the public port, this locks down access to
    # docker and host. Public port won't be exposed through the firewall
    port: 127.0.0.1:6379:6379
```
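Before we unpack the config, here is roughly how you bring this destination up once the secrets (covered at the end of the article) are in place:

```bash
# First boot: pushes the image and boots the accessories (db, db_backup, redis),
# the proxy, and the app containers for this destination.
kamal setup -d staging

# Subsequent releases only need a deploy.
kamal deploy -d staging
```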
There is a lot to unpack in that config, so let's start from the top.
Roles
The servers can have many roles; in my case they are the Rails server and the Solid Queue/jobs server. My proxy is terminated at Cloudflare, which handles SSL, so to avoid too many redirects I turn SSL off for the proxy.
Database host
The DB_HOST, in this case, points to the myapp-db container, and here is how that works: remember the service: myapp part in config/deploy.yml? That becomes a prefix for all the Docker containers, and since my accessory is called db, its container ends up as myapp-db.
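To make it concrete, the Rails side just reads DB_HOST. The database.yml below is a sketch with illustrative names, not copied from my app:

```yaml
# config/database.yml (sketch — credential handling depends on your own setup)
staging:
  adapter: postgresql
  host: <%= ENV["DB_HOST"] %>   # resolves to myapp-db on the Docker network
  database: myapp
  username: myapp
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
```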
Port mapping
We will come back to secrets in a bit but first let's talk about the port mapping of the accessories.
Exposing the ports of the accessories should be done with `port: 127.0.0.1:6379:6379` to prevent Docker from blowing a hole in your firewall rules. It is still a great idea to use a Hetzner firewall, but this avoids the problem altogether.
Don't ask me how I know this, but say you have multiple destinations that share database backups for restoring a staging environment. If the public port is open and you sloppily paste the production container IP into the accessory config, you will blow away the production database without any warning. You want to make that kind of accident impossible.
The IP address of your accessories can then be the public IP of your server without problems.
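The difference is only the bind address, but it decides whether the port is reachable from the internet; side by side:

```yaml
# Binds on all interfaces: Docker writes its own iptables rules,
# so the port is reachable from outside regardless of ufw.
port: 6379:6379

# Binds on loopback only: the host can reach it on 127.0.0.1, app containers
# still reach it by container name over the Docker network, and nothing is
# exposed publicly.
port: 127.0.0.1:6379:6379
```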
How do Kamal secrets work with destinations?
Kamal uses the .kamal/secrets file by default, but for destinations it uses a destination-specific file, so .kamal/secrets.staging is where we have to put the secrets for this destination. It is also quite possible to back these secrets with something like 1Password.
When you run kamal deploy -d staging, it picks the file .kamal/secrets.staging for you, and if you have shared environment variables that apply to all destinations, you can put those into .kamal/secrets-common and Kamal will merge the two for you.
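On disk that would look roughly like this; the variable names mirror the config above, the values and command substitutions are placeholders for whatever you use:

```bash
# .kamal/secrets-common — shared across all destinations
KAMAL_REGISTRY_USERNAME=ghuser
KAMAL_REGISTRY_PASSWORD=$(gh auth token)

# .kamal/secrets.staging — only used with `-d staging`
RAILS_MASTER_KEY=$(cat config/credentials/staging.key)
SECRET_KEY_BASE=replace-me
POSTGRES_PASSWORD=replace-me
POSTMARK_API_TOKEN=replace-me
```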