Wagtail Docker DevOps

Deploying Wagtail to Production: Docker, Nginx, and PostgreSQL

Getting Wagtail running locally is easy. Getting it running reliably in production — with proper static file serving, media uploads, environment secrets, database backups, and SSL — takes a bit more work. This guide covers a battle-tested Docker Compose stack: Nginx, Gunicorn, PostgreSQL, and Redis, with WhiteNoise for static files and Certbot for SSL.

1. Stack Overview

Internet → Nginx (:80/:443, terminates SSL) → Gunicorn (:8000 internal, 4 workers) → Wagtail (Django WSGI app) → PostgreSQL (:5432) + Redis (:6379)
Production stack: Nginx terminates SSL and proxies to Gunicorn (4 workers) which runs Wagtail. PostgreSQL stores content and sessions; Redis handles caching and Celery task queues.
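The moving parts above map to a short Python dependency list. A hypothetical requirements.txt for this stack (pin exact versions in a real project):

```
wagtail
gunicorn
psycopg[binary]
django-redis
whitenoise[brotli]
django-storages[s3]  # only if you switch media to S3 (section 7)
```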

2. Production Settings

# mysite/settings/production.py
import os
from .base import *

DEBUG = False

SECRET_KEY = os.environ['DJANGO_SECRET_KEY']

_domain = os.environ.get('WAGTAIL_SITE_DOMAIN', 'localhost')
ALLOWED_HOSTS = [_domain, f'www.{_domain}']  # Nginx serves both the apex and www

DATABASES = {
    'default': {
        'ENGINE':   'django.db.backends.postgresql',
        'NAME':     os.environ['POSTGRES_DB'],
        'USER':     os.environ['POSTGRES_USER'],
        'PASSWORD': os.environ['POSTGRES_PASSWORD'],
        'HOST':     os.environ.get('POSTGRES_HOST', 'db'),
        'PORT':     os.environ.get('POSTGRES_PORT', '5432'),
    }
}

CACHES = {
    'default': {
        'BACKEND':  'django_redis.cache.RedisCache',
        'LOCATION': os.environ.get('REDIS_URL', 'redis://redis:6379/1'),
    }
}

STATIC_ROOT  = '/app/staticfiles'
STATIC_URL   = '/static/'
MEDIA_ROOT   = '/app/media'
MEDIA_URL    = '/media/'

# WhiteNoise for serving static files without a CDN
MIDDLEWARE.insert(1, 'whitenoise.middleware.WhiteNoiseMiddleware')
STORAGES = {
    'staticfiles': {
        'BACKEND': 'whitenoise.storage.CompressedManifestStaticFilesStorage',
    },
    'default': {
        'BACKEND': 'django.core.files.storage.FileSystemStorage',
    },
}

WAGTAIL_SITE_NAME = os.environ.get('WAGTAIL_SITE_NAME', 'My Site')

LOGGING = {
    'version': 1,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'root': {'handlers': ['console'], 'level': 'WARNING'},
}
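Because Nginx terminates SSL and forwards X-Forwarded-Proto (section 5), Django needs to trust that header, and the HTTPS origin must be allowed for CSRF checks. A sketch of the extra settings to append to production.py (env var names match the .env file in section 8):

```python
import os

# Trust Nginx's X-Forwarded-Proto header so request.is_secure() is True
# behind the proxy (required for secure cookies and HTTPS-aware redirects).
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')

# Django 4.0+ validates the Origin header on POSTs (e.g. admin logins),
# so the HTTPS origins must be listed explicitly.
_domain = os.environ.get('WAGTAIL_SITE_DOMAIN', 'localhost')
CSRF_TRUSTED_ORIGINS = [f'https://{_domain}', f'https://www.{_domain}']
```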

3. Dockerfile

FROM python:3.13-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PIP_NO_CACHE_DIR=1

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq-dev gcc libjpeg-dev zlib1g-dev libwebp-dev \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

# The production settings read required env vars at import time, so supply
# dummy values for this build-time run; real values come from .env at runtime.
RUN DJANGO_SECRET_KEY=build-time-only POSTGRES_DB=x POSTGRES_USER=x POSTGRES_PASSWORD=x \
    python manage.py collectstatic --noinput --settings=mysite.settings.production

EXPOSE 8000

CMD ["gunicorn", "mysite.wsgi:application", \
     "--bind", "0.0.0.0:8000", \
     "--workers", "4", \
     "--timeout", "120", \
     "--access-logfile", "-"]

The imaging libraries (libjpeg-dev, libwebp-dev) are needed when Pillow — which powers Wagtail's image processing — is built from source. Remove libpq-dev and gcc if you install psycopg[binary] instead of building the PostgreSQL driver from source.
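Because the Dockerfile copies the entire project with COPY . ., add a .dockerignore so secrets and build junk never land in the image — a minimal sketch:

```
.env
.git
__pycache__/
*.pyc
media/
staticfiles/
```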


4. docker-compose.yml

services:

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file: .env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: unless-stopped

  web:
    build: .
    restart: unless-stopped
    command: >
      sh -c "python manage.py migrate --settings=mysite.settings.production &&
             gunicorn mysite.wsgi:application --bind 0.0.0.0:8000 --workers 4 --timeout 120"
    env_file: .env
    environment:
      - DJANGO_SETTINGS_MODULE=mysite.settings.production
    volumes:
      - media_files:/app/media
    depends_on:
      db:
        condition: service_healthy
    expose:
      - "8000"

  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./nginx/certs:/etc/nginx/certs:ro
      - media_files:/media:ro
      - certbot_www:/var/www/certbot:ro
    depends_on:
      - web

  certbot:
    image: certbot/certbot
    volumes:
      - ./nginx/certs:/etc/letsencrypt
      - certbot_www:/var/www/certbot

volumes:
  postgres_data:
  media_files:
  certbot_www:

5. Nginx Configuration

# nginx/conf.d/wagtail.conf
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    # Certbot ACME challenge
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    http2 on;  # `listen ... http2` is deprecated on nginx 1.25+
    server_name yourdomain.com www.yourdomain.com;

    ssl_certificate     /etc/nginx/certs/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/live/yourdomain.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         HIGH:!aNULL:!MD5;

    client_max_body_size 50M;  # Allow large image uploads

    # /static/ is served by WhiteNoise inside the app container (section 6),
    # so those requests simply fall through to the proxy_pass block below;
    # WhiteNoise sets its own long-lived cache headers on hashed filenames.

    location /media/ {
        alias /media/;
        expires 7d;
    }

    location / {
        proxy_pass         http://web:8000;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_read_timeout 120s;
    }
}

6. Static Files with WhiteNoise

WhiteNoise serves compressed, fingerprinted static files directly from Gunicorn — no CDN required for most sites. Install it (and pin it in requirements.txt):

pip install whitenoise[brotli]

The CompressedManifestStaticFilesStorage backend generates filename.HASH.ext filenames and serves them with long-lived cache headers. The hash changes when file content changes, so browsers always get the latest version.
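To make the fingerprinting concrete, here is a simplified sketch of how a manifest storage derives hashed names (the real CompressedManifestStaticFilesStorage also rewrites url() references inside CSS and records the mapping in a manifest file):

```python
import hashlib
from pathlib import PurePosixPath

def hashed_name(name: str, content: bytes) -> str:
    """Splice a content hash into a filename, e.g. css/site.css -> css/site.<hash>.css."""
    digest = hashlib.md5(content).hexdigest()[:12]  # Django keeps the first 12 hex chars
    path = PurePosixPath(name)
    return str(path.with_name(f"{path.stem}.{digest}{path.suffix}"))

print(hashed_name("css/site.css", b"body{margin:0}"))
```

Because the hash is derived from the file's bytes, any content change produces a new URL, which is why the long-lived cache headers are safe.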

Run collectstatic before starting Gunicorn — the Dockerfile does this during image build so it only runs once per deploy, not on every container start.


7. Media Files

Wagtail stores uploaded images and documents in MEDIA_ROOT. In Docker Compose, the media_files named volume is shared between the web and nginx containers. Nginx serves media directly, bypassing Gunicorn.

For production sites with high image traffic, replace the volume mount with an S3-backed storage using django-storages[s3]:

# settings/production.py — S3 media storage
STORAGES = {
    'default': {
        'BACKEND': 'storages.backends.s3boto3.S3Boto3Storage',
    },
    'staticfiles': {
        'BACKEND': 'whitenoise.storage.CompressedManifestStaticFilesStorage',
    },
}

AWS_STORAGE_BUCKET_NAME = os.environ['AWS_STORAGE_BUCKET_NAME']
AWS_S3_REGION_NAME      = os.environ.get('AWS_S3_REGION_NAME', 'us-east-1')
AWS_S3_FILE_OVERWRITE   = False
AWS_DEFAULT_ACL         = None

8. Environment Variables

# .env (never commit to git — add to .gitignore)
DJANGO_SECRET_KEY=your-50-char-random-string-here
POSTGRES_DB=wagtail_prod
POSTGRES_USER=wagtail
POSTGRES_PASSWORD=strong-random-password
POSTGRES_HOST=db
REDIS_URL=redis://redis:6379/1
WAGTAIL_SITE_NAME=My Wagtail Site
WAGTAIL_SITE_DOMAIN=yourdomain.com

Generate a Django secret key with the Django Secret Key Generator or via Python:

python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"

9. First Deployment

# Pull code to server, then:
docker compose pull
docker compose build --no-cache
docker compose up -d

# Create superuser
docker compose exec web python manage.py createsuperuser \
  --settings=mysite.settings.production

# Check logs
docker compose logs -f web

For subsequent deploys:

git pull
docker compose build web
docker compose up -d --no-deps web
docker compose exec web python manage.py migrate --settings=mysite.settings.production

10. SSL with Certbot

Start without SSL first (HTTP only), get the certificate, then enable HTTPS:

# Issue the certificate (while nginx is serving HTTP on port 80)
docker compose run --rm certbot certonly \
  --webroot \
  --webroot-path /var/www/certbot \
  --email your@email.com \
  --agree-tos \
  --no-eff-email \
  -d yourdomain.com -d www.yourdomain.com

# Now switch to the HTTPS nginx config and reload
docker compose exec nginx nginx -s reload

Add a cron job or systemd timer to auto-renew:

# /etc/cron.d/certbot-renew
0 3 * * * root docker compose -f /srv/mysite/docker-compose.yml run --rm certbot renew --quiet \
  && docker compose -f /srv/mysite/docker-compose.yml exec -T nginx nginx -s reload

Your Wagtail site is now running in production with HTTPS, compressed static files, PostgreSQL, and Redis. Next steps: add database backups via pg_dump to S3, set up Celery for background tasks, and add a monitoring solution like Sentry.