I deployed my first Django REST Framework app to production last month. Took me way longer than it should have.
The problem? Every tutorial I found was either "here's how to run Django locally" or "just use Heroku." Nothing in between. Nothing that actually explained how to get your API running on a real VPS with Docker, Nginx, SSL, and all the stuff production apps actually need.
So I figured it out the hard way. And now I'm writing the guide I wish I had.
What We're Building
By the end of this guide, you'll have:
- A Django REST Framework app running in Docker containers
- PostgreSQL database (also in Docker)
- Nginx reverse proxy handling requests and serving static files
- SSL certificate from Let's Encrypt (free HTTPS)
- Everything running on a cheap VPS ($5-10/month)
This isn't a "deploy to Heroku in 5 minutes" tutorial. This is the real deal - what actual production apps use.
The Architecture
Before we dive in, let's understand what we're setting up:
User Request
│
▼
┌─────────────────────────────────────────────────────────┐
│ Your VPS │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Docker Compose │ │
│ │ │ │
│ │ ┌─────────┐ ┌──────────────┐ ┌───────────┐ │ │
│ │ │ Nginx │───▶│ Gunicorn │───▶│ PostgreSQL│ │ │
│ │ │ :80/443│ │ (Django) │ │ :5432 │ │ │
│ │ └─────────┘ │ :8000 │ └───────────┘ │ │
│ │ │ └──────────────┘ │ │
│ │ │ │ │ │
│ │ ▼ ▼ │ │
│ │ ┌─────────┐ ┌──────────────┐ │ │
│ │ │ Static │ │ Media │ │ │
│ │ │ Files │ │ Uploads │ │ │
│ │ └─────────┘ └──────────────┘ │ │
│ └────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────┘
Here's how requests flow:
- User hits your domain (yourdomain.com)
- Nginx receives the request on port 80/443
- If it's a static file, Nginx serves it directly (fast)
- If it's an API request, Nginx forwards it to Gunicorn
- Gunicorn runs your Django app and talks to PostgreSQL
- Response goes back through the chain
Why this setup? Nginx is ridiculously good at serving static files and handling SSL. Gunicorn is built for running Python apps in production. Docker keeps everything isolated and reproducible. PostgreSQL because... it's PostgreSQL.
Prerequisites
What you need before starting:
- A working Django REST Framework project (tested locally)
- A VPS (DigitalOcean, Linode, AWS EC2, Vultr - any works)
- A domain name (optional but recommended for SSL)
- Basic command line knowledge
I'm using Ubuntu 24.04 LTS for this guide, but any modern Linux works. Adjust package manager commands if you're on a different distro.
Step 1: Organize Your Django Settings
Most tutorials have a single settings.py. That works for local development but becomes a mess in production. Let's split it up.
Create this structure:
your_project/
└── settings/
├── __init__.py
├── base.py
├── development.py
└── production.py
The __init__.py automatically loads the right settings based on an environment variable:
# settings/__init__.py
import os
environment = os.environ.get('DJANGO_ENV', 'development')
if environment == 'production':
    from .production import *
elif environment == 'staging':
    # optional: only if you also create settings/staging.py
    from .staging import *
else:
    from .development import *
This file reads the DJANGO_ENV environment variable and imports the corresponding settings module. If no environment is set, it defaults to development - safe for local work. The * import pulls all settings from the chosen module into Django's settings namespace.
Your base.py contains everything shared across environments - installed apps, middleware, REST framework settings, etc. Move everything from your current settings.py here, except database config and debug settings.
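Two things the guide doesn't spell out but that trip people up: base.py needs one extra .parent on BASE_DIR once settings live in a package, and it's the natural home for reading SECRET_KEY from the environment. Here's a minimal sketch of both, plus a development.py to round out the structure - the names and defaults are my assumptions, adapt them to your project:
# settings/base.py (excerpt - the rest is your existing settings.py content)
from pathlib import Path
from decouple import config  # pip install python-decouple

# One extra .parent compared to the default single-file settings.py,
# because this file now sits one directory deeper (your_project/settings/base.py)
BASE_DIR = Path(__file__).resolve().parent.parent.parent

SECRET_KEY = config('SECRET_KEY')

# settings/development.py
from .base import *

DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']

# Local database - swap in Postgres here if you want parity with production
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
    }
}

# Print outgoing emails to the console instead of sending them
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'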
Then production.py:
# settings/production.py
import dj_database_url
from decouple import config, Csv
from .base import *
DEBUG = False
ALLOWED_HOSTS = config('ALLOWED_HOSTS', cast=Csv())
# Database - PostgreSQL
DATABASES = {
'default': dj_database_url.config(
default=config('DATABASE_URL'),
conn_max_age=600,
conn_health_checks=True,
)
}
# Security - these matter in production
SECURE_SSL_REDIRECT = True
SECURE_HSTS_SECONDS = 31536000 # 1 year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# Cookies
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
# CORS - strict in production
CORS_ALLOW_ALL_ORIGINS = False
CORS_ALLOWED_ORIGINS = config('CORS_ALLOWED_ORIGINS', cast=Csv())
# Email - real SMTP
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = config('EMAIL_HOST')
EMAIL_PORT = config('EMAIL_PORT', cast=int)
EMAIL_HOST_USER = config('EMAIL_HOST_USER')
EMAIL_HOST_PASSWORD = config('EMAIL_HOST_PASSWORD')
EMAIL_USE_TLS = True
This imports shared settings from base.py, then overrides production-specific values. The dj_database_url package parses database connection strings. The config() function (from python-decouple) reads values from environment variables. The SECURE_* settings force HTTPS and tell browsers to always use encrypted connections. Secure cookies ensure session data is only sent over HTTPS. CORS settings restrict which domains can make API requests.
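If the Csv() cast looks magical, this is all it does - a quick illustration, assuming ALLOWED_HOSTS=yourdomain.com,www.yourdomain.com is set in the environment:
# Illustration only - not part of the project code
from decouple import config, Csv
config('ALLOWED_HOSTS', cast=Csv())
# -> ['yourdomain.com', 'www.yourdomain.com']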
Step 2: Create the Dockerfile
Here's where it gets interesting. We're using a multi-stage Docker build to keep the final image small.
# Dockerfile
# Stage 1: Build dependencies
FROM python:3.14-slim as builder
WORKDIR /app
# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
# Create virtual environment
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# Install Python dependencies
COPY requirements/base.txt requirements/base.txt
COPY requirements/production.txt requirements/production.txt
RUN pip install --upgrade pip && \
pip install --no-cache-dir -r requirements/production.txt
# Stage 2: Runtime
FROM python:3.14-slim as runtime
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1 \
PATH="/opt/venv/bin:$PATH"
WORKDIR /app
# Install runtime dependencies only
RUN apt-get update && apt-get install -y --no-install-recommends \
libpq5 \
curl \
&& rm -rf /var/lib/apt/lists/*
# Copy virtual environment from builder
COPY --from=builder /opt/venv /opt/venv
# Create non-root user
RUN useradd --create-home appuser
# Copy application code
COPY --chown=appuser:appuser . /app/
# Create directories
RUN mkdir -p /app/staticfiles /app/media /app/logs && \
chown -R appuser:appuser /app/staticfiles /app/media /app/logs
# Switch to non-root user
USER appuser
# Collect static files
RUN SECRET_KEY=build-time-dummy \
ALLOWED_HOSTS=localhost \
DATABASE_URL=sqlite:///dummy.db \
python manage.py collectstatic --noinput
EXPOSE 8000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8000/health/ || exit 1
# Run gunicorn with dynamic workers based on CPU cores
# Formula: (2 × CPU cores) + 1
CMD ["sh", "-c", "gunicorn your_project.wsgi:application \
--bind 0.0.0.0:8000 \
--workers $((2 * $(nproc) + 1)) \
--threads 2 \
--worker-class gthread \
--access-logfile - \
--error-logfile -"]
The Dockerfile uses a multi-stage build. The first stage (builder) installs build tools like compilers needed to install Python packages with C extensions. It creates a virtual environment and installs all dependencies. The second stage (runtime) starts fresh with a clean image, copies only the compiled virtual environment, and skips the build tools - this keeps the final image around 150MB instead of 800MB.
We create a non-root user (appuser) because running containers as root is a security risk. The collectstatic command uses dummy environment variables because Django needs them to run, but collectstatic doesn't actually use database connections. The HEALTHCHECK tells Docker how to verify the container is working.
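One thing this Dockerfile (and the Compose health check and Nginx config later) assumes is a /health/ endpoint in your Django project - the guide never defines it. Here's a minimal sketch you can adapt; the module name and URL wiring are my assumptions:
# health_check.py (hypothetical module - put it wherever fits your project)
from django.http import JsonResponse

def health_check(request):
    # Keep this cheap: no DB or cache calls, it only proves the process is up
    return JsonResponse({'status': 'healthy'})

# your_project/urls.py (excerpt)
# from django.urls import path
# from .health_check import health_check
# urlpatterns += [path('health/', health_check)]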
The Gunicorn workers formula (2 × CPU cores) + 1 is the recommended approach from Gunicorn docs. On a 2-core VPS, this gives you 5 workers. The nproc command returns the number of CPU cores available to the container. Note: more workers ≠ better performance. Gunicorn typically needs only 4-12 workers to handle heavy traffic - monitor and adjust under load.
The --bind flag tells Gunicorn where to listen for incoming connections from Nginx. It's how Nginx and Gunicorn communicate - Nginx receives requests from users and forwards them to Gunicorn. There are two ways to set this up:
- TCP binding (--bind 0.0.0.0:8000): Gunicorn listens on a network port. Nginx connects via http://web:8000 over the Docker network. Simple, works across containers.
- Unix socket (--bind unix:/run/gunicorn.sock): Gunicorn creates a socket file on disk. Nginx connects through that file. Faster (~5-10%) but requires a shared volume between containers.
For Docker, TCP is the standard approach - containers are isolated, so they communicate over the network. Unix sockets add complexity with minimal benefit. For non-Docker deployments where Nginx and Gunicorn run on the same machine, Unix sockets are preferred for the slight performance gain.
The dummy environment variables during collectstatic are intentional. Django needs certain settings to run any management command, but collectstatic doesn't actually use them. It just gathers CSS and JS files.
Step 3: Docker Compose for Orchestration
Docker Compose ties everything together. Create docker-compose.production.yml:
# docker-compose.production.yml
services:
  db:
    image: postgres:18-alpine
    container_name: myapp_db
    restart: unless-stopped
    env_file:
      - .env.production
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app_network
web:
build:
context: .
dockerfile: Dockerfile
container_name: myapp_web
restart: unless-stopped
env_file:
- .env.production
environment:
- DJANGO_ENV=production
volumes:
- static_volume:/app/staticfiles
- media_volume:/app/media
- logs_volume:/app/logs
depends_on:
db:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/health/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
networks:
- app_network
    command: >
      sh -c "python manage.py migrate --noinput &&
             gunicorn your_project.wsgi:application
             --bind 0.0.0.0:8000
             --workers $$((2 * $$(nproc) + 1))
             --threads 2
             --worker-class gthread"
nginx:
image: nginx:alpine
container_name: myapp_nginx
restart: unless-stopped
ports:
- "80:80"
- "443:443"
    volumes:
      - ./nginx/nginx.production.conf:/etc/nginx/conf.d/default.conf:ro
      - static_volume:/app/staticfiles:ro
      - media_volume:/app/media:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
      - /var/www/certbot:/var/www/certbot:ro
depends_on:
web:
condition: service_healthy
networks:
- app_network
volumes:
postgres_data:
static_volume:
media_volume:
logs_volume:
networks:
app_network:
driver: bridge
This defines three services. The db service runs PostgreSQL with a health check that verifies the database is accepting connections; it reads its POSTGRES_* credentials from .env.production via env_file, so the credentials live in one place. The web service builds your Django app, waits for the database to be healthy before starting (depends_on with condition), then runs migrations and starts Gunicorn. The doubled $$ in its command stops Compose from trying to interpolate the shell arithmetic itself - each $$ reaches sh as a literal $. The nginx service acts as a reverse proxy, handling incoming requests on ports 80 and 443, and mounts the Certbot webroot so certificate renewals keep working.
Named volumes (postgres_data, static_volume, etc.) persist data between container restarts. The shared volumes let Nginx serve static files directly without going through Django. The app_network allows containers to communicate using service names (like web:8000) as hostnames.
Step 4: Nginx Configuration with SSL
Create nginx/nginx.production.conf:
# nginx/nginx.production.conf
upstream django {
server web:8000;
}
# Redirect HTTP to HTTPS
server {
listen 80;
server_name yourdomain.com www.yourdomain.com;
# Let's Encrypt challenge
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$server_name$request_uri;
}
}
# HTTPS server
server {
listen 443 ssl http2;
server_name yourdomain.com www.yourdomain.com;
# SSL certificates
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
# SSL settings
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
# Security headers
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options DENY;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
# Max upload size
client_max_body_size 10M;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript
application/javascript application/json application/xml;
# Static files - Nginx serves directly
location /static/ {
alias /app/staticfiles/;
expires 30d;
add_header Cache-Control "public, immutable";
}
# Media files
location /media/ {
alias /app/media/;
expires 7d;
add_header Cache-Control "public";
}
# Health check
location /health/ {
access_log off;
proxy_pass http://django;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# Django application
location / {
proxy_pass http://django;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# Deny hidden files
location ~ /\. {
deny all;
}
}
The upstream block defines where Django is running. The first server block listens on port 80 (HTTP) and redirects everything to HTTPS, except Let's Encrypt challenge requests needed for certificate renewal. The second server block handles HTTPS on port 443 with SSL certificates from Let's Encrypt.
Security headers like X-Frame-Options and Strict-Transport-Security protect against clickjacking and downgrade attacks. The /static/ and /media/ locations serve files directly from the filesystem - much faster than routing through Django. Gzip compression reduces response sizes. The final location block proxies all other requests to Django, passing along important headers like the real client IP.
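If you ever need the real client IP inside Django (for throttling or audit logs), it arrives through those proxy_set_header directives. A quick illustration with a hypothetical DRF view:
# Illustration - a hypothetical DRF view reading the forwarded client IP
from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(['GET'])
def whoami(request):
    # X-Real-IP is set by Nginx above; REMOTE_ADDR alone would only show the proxy
    ip = request.META.get('HTTP_X_REAL_IP') or request.META.get('REMOTE_ADDR')
    return Response({'client_ip': ip})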
Step 5: Environment Variables
Create .env.production (never commit this file):
# Django
DJANGO_ENV=production
SECRET_KEY=your-super-secret-key-generate-a-new-one
DEBUG=False
ALLOWED_HOSTS=yourdomain.com,www.yourdomain.com
# Database
POSTGRES_DB=myapp_db
POSTGRES_USER=myapp_user
POSTGRES_PASSWORD=super-secure-password-here
DATABASE_URL=postgres://myapp_user:super-secure-password-here@db:5432/myapp_db
# Email (using Gmail as example)
EMAIL_HOST=smtp.gmail.com
EMAIL_PORT=587
[email protected]
EMAIL_HOST_PASSWORD=your-app-specific-password
DEFAULT_FROM_EMAIL=Your App <[email protected]>
# CORS
CORS_ALLOWED_ORIGINS=https://yourdomain.com,https://www.yourdomain.com
# Optional: Sentry for error tracking
SENTRY_DSN=
These environment variables configure your production app. The SECRET_KEY is used for cryptographic signing - generate a unique one and never commit it. DATABASE_URL follows a standard format: postgres://user:password@host:port/database. The db hostname (the part after the @) works because Docker Compose creates a network where services reach each other by service name.
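If you're curious what dj_database_url turns that URL into, here's roughly the result (illustrative only - values mirror the example above):
# Illustration only
import dj_database_url
cfg = dj_database_url.parse('postgres://myapp_user:super-secure-password-here@db:5432/myapp_db')
# cfg['ENGINE'] -> 'django.db.backends.postgresql' (recent dj-database-url versions)
# cfg['HOST']   -> 'db'   (the Compose service name)
# cfg['NAME']   -> 'myapp_db'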
To generate a secure secret key:
python -c "import secrets; print(secrets.token_urlsafe(50))"
Add .env.production to your .gitignore. Seriously. Don't commit secrets to git.
Step 6: Set Up Your VPS
SSH into your server:
ssh root@your-server-ip
First, update everything:
apt update && apt upgrade -y
Install Docker:
# Install prerequisites
apt install -y apt-transport-https ca-certificates curl software-properties-common
# Add Docker's GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Add Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
apt update
apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Verify installation
docker --version
docker compose version
This installs Docker and Docker Compose plugin from Docker's official repository. We add their GPG key for package verification, then add the repository to apt sources. The docker-compose-plugin gives us the newer docker compose command (with a space) instead of the older docker-compose.
Create a non-root user for running your app:
adduser deploy
usermod -aG docker deploy
su - deploy
Running as root is risky. We create a deploy user and add them to the docker group so they can run Docker commands without sudo.
Step 7: Clone and Configure
As the deploy user:
# Clone your repository
git clone https://github.com/yourusername/your-project.git
cd your-project
# Create production environment file
cp .env.example .env.production
nano .env.production
# Fill in your production values
Step 8: SSL Certificates with Let's Encrypt
Before running the full stack, we need SSL certificates. Here's the chicken-and-egg problem: Nginx needs certificates to start with SSL, but Certbot needs Nginx running to verify your domain.
Solution: Start with a temporary HTTP-only Nginx config, get the certificate, then switch to the full config.
Create a temporary Nginx config:
# nginx/nginx.temp.conf
server {
listen 80;
server_name yourdomain.com www.yourdomain.com;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 200 'OK';
add_header Content-Type text/plain;
}
}
This minimal config just handles the Let's Encrypt verification challenge. It serves files from /var/www/certbot for the /.well-known/acme-challenge/ path that Certbot uses to prove you own the domain.
Start Nginx with the temp config:
docker run -d --name temp-nginx \
-p 80:80 \
-v $(pwd)/nginx/nginx.temp.conf:/etc/nginx/conf.d/default.conf:ro \
-v /var/www/certbot:/var/www/certbot \
nginx:alpine
Install Certbot and get certificates (run these as root or with sudo - the deploy user can't install packages):
# Install certbot
apt install -y certbot
# Create webroot directory
mkdir -p /var/www/certbot
# Get certificate
certbot certonly --webroot \
-w /var/www/certbot \
-d yourdomain.com \
-d www.yourdomain.com \
--email [email protected] \
--agree-tos \
--non-interactive
Certbot uses the webroot method - it places a file in /var/www/certbot and Let's Encrypt servers try to access it via your domain. If successful, they issue the certificate. The -d flags specify which domains to include in the certificate.
Stop the temp Nginx:
docker stop temp-nginx && docker rm temp-nginx
Set up automatic certificate renewal:
# Add to root's crontab (runs twice daily) - certbot renew needs root
crontab -e
# Add this line:
0 0,12 * * * certbot renew --quiet && docker exec myapp_nginx nginx -s reload
Let's Encrypt certificates expire after 90 days. This cron job runs twice daily to check if renewal is needed. After renewal, it reloads Nginx to pick up the new certificate.
Step 9: Deploy!
Now for the moment of truth:
# Build and start everything
docker compose -f docker-compose.production.yml up -d --build
# Watch the logs
docker compose -f docker-compose.production.yml logs -f
Wait for all containers to be healthy (usually 30-60 seconds):
docker compose -f docker-compose.production.yml ps
You should see all three containers (db, web, nginx) with status "healthy" or "Up".
Step 10: Create Superuser and Test
# Create admin user
docker compose -f docker-compose.production.yml exec web python manage.py createsuperuser
Now test your endpoints:
- https://yourdomain.com/health/ - should return {"status": "healthy"}
- https://yourdomain.com/admin/ - Django admin
- https://yourdomain.com/swagger/ - API documentation (if you have drf-yasg)
Check SSL grade at SSL Labs. You should get an A or A+.
Common Commands You'll Use
Here's a cheat sheet for managing your deployment:
# View logs
docker compose -f docker-compose.production.yml logs -f
docker compose -f docker-compose.production.yml logs -f web # Just Django
# Restart everything
docker compose -f docker-compose.production.yml restart
# Run Django management commands
docker compose -f docker-compose.production.yml exec web python manage.py migrate
docker compose -f docker-compose.production.yml exec web python manage.py shell
docker compose -f docker-compose.production.yml exec web python manage.py createsuperuser
# Access PostgreSQL
docker compose -f docker-compose.production.yml exec db psql -U myapp_user -d myapp_db
# Backup database
docker compose -f docker-compose.production.yml exec -T db pg_dump -U myapp_user myapp_db > backup_$(date +%Y%m%d).sql
# Update code and redeploy
git pull origin main
docker compose -f docker-compose.production.yml up -d --build
# Check resource usage
docker stats
These are commands you'll use regularly. The exec command runs commands inside a running container. logs -f follows log output in real-time. pg_dump creates a database backup. docker stats shows CPU and memory usage for all containers.
Updating Your App
When you push new code:
# SSH into server
ssh deploy@your-server-ip
# Navigate to project
cd your-project
# Pull latest code
git pull origin main
# Rebuild and restart
docker compose -f docker-compose.production.yml up -d --build
# Run migrations if needed
docker compose -f docker-compose.production.yml exec web python manage.py migrate
The --build flag rebuilds the Docker image with your new code. Docker is smart enough to use cached layers for unchanged parts.
Troubleshooting
Container won't start
# Check logs
docker compose -f docker-compose.production.yml logs web
# Common causes:
# - Missing .env.production file
# - Invalid DATABASE_URL
# - Django import errors
502 Bad Gateway from Nginx
This means Nginx can't reach Django. Check if the web container is healthy:
docker compose -f docker-compose.production.yml ps
docker compose -f docker-compose.production.yml logs web
Static files not loading
# Rebuild static files
docker compose -f docker-compose.production.yml exec web python manage.py collectstatic --noinput
# Check Nginx logs
docker compose -f docker-compose.production.yml logs nginx
Database connection errors
The web container might be starting before the database is ready. The depends_on with health check should handle this, but sometimes it needs more time:
# Check if db is healthy
docker compose -f docker-compose.production.yml ps db
# Look at db logs
docker compose -f docker-compose.production.yml logs db
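If the database regularly needs more time than the health check allows, a common workaround is a small wait_for_db management command run before migrate. Here's a minimal sketch - a hypothetical helper, not part of the guide's repo. Drop it into any app's management/commands/ folder (with the usual __init__.py files) and prepend python manage.py wait_for_db && to the Compose command:
# yourapp/management/commands/wait_for_db.py (hypothetical helper)
import time
from django.core.management.base import BaseCommand
from django.db import connections
from django.db.utils import OperationalError

class Command(BaseCommand):
    help = "Block until the default database accepts connections."

    def handle(self, *args, **options):
        self.stdout.write("Waiting for database...")
        for _ in range(30):
            try:
                connections['default'].cursor()
            except OperationalError:
                time.sleep(1)
            else:
                self.stdout.write(self.style.SUCCESS("Database available."))
                return
        raise SystemExit("Database unavailable after 30 seconds.")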
Out of memory on small VPS
If you're on a 1GB VPS and running out of memory, the dynamic workers formula might create too many workers. Override it with a fixed number:
# Check memory
free -h
# Override dynamic workers with a fixed number in docker-compose.production.yml
# Change: --workers $((2 * $(nproc) + 1)) to --workers 2
# Or for very small instances: --workers 1
Security Checklist
Before going live, make sure:
- DEBUG=False in production (seriously, check twice)
- SECRET_KEY is unique and not in git
- ALLOWED_HOSTS only includes your domain
- CORS_ALLOWED_ORIGINS is restricted
- Database password is strong and unique
- Firewall only allows ports 22, 80, 443
- SSH uses key-based auth (disable password auth)
- SSL certificate is valid
Setting Up a Firewall
If you haven't already:
# Allow SSH
ufw allow 22
# Allow HTTP and HTTPS
ufw allow 80
ufw allow 443
# Enable firewall
ufw enable
# Check status
ufw status
UFW (Uncomplicated Firewall) blocks all incoming traffic except the ports you allow. Port 22 is SSH, 80 is HTTP, 443 is HTTPS. This prevents unauthorized access to other services running on your server.
Optional: Error Tracking with Sentry
In production, you want to know when things break. Sentry is free for small projects.
Add to your production.py:
SENTRY_DSN = config('SENTRY_DSN', default='')
if SENTRY_DSN:
    import sentry_sdk
    from sentry_sdk.integrations.django import DjangoIntegration
    sentry_sdk.init(
        dsn=SENTRY_DSN,
        integrations=[DjangoIntegration()],
        traces_sample_rate=0.1,
        send_default_pii=False,
    )
This initializes Sentry only if a DSN is provided. The traces_sample_rate controls how many requests get performance monitoring (0.1 = 10%). Setting send_default_pii to False prevents accidentally sending user data to Sentry.
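Once deployed, you can confirm events actually reach Sentry from the Django shell inside the web container (optional sanity check):
# Run inside: docker compose -f docker-compose.production.yml exec web python manage.py shell
import sentry_sdk
sentry_sdk.capture_message("Sentry wiring test", level="info")
# The message should show up in your Sentry project within a few seconds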
What We Built
Let's recap what you now have:
- Django REST Framework running in a Docker container
- PostgreSQL database with persistent storage
- Nginx handling SSL and serving static files
- Automatic HTTPS with Let's Encrypt
- Health checks for all services
- Non-root user in containers (security)
- Proper logging
- Easy deployment workflow
This is a real production setup. Not a tutorial shortcut. You can run real apps with real users on this.
Next Steps
Things you might want to add:
- CI/CD: GitHub Actions to auto-deploy on push to main
- Monitoring: Set up uptime monitoring (UptimeRobot is free)
- Backups: Automated daily database backups to cloud storage
- CDN: Put Cloudflare in front for caching and DDoS protection
- Managed database: Move PostgreSQL to AWS RDS or DigitalOcean managed database for better reliability
The setup I showed you handles most small-to-medium apps just fine. Scale when you need to, not before.
Still got questions? I'm @maniishbhusal on Twitter. Drop me a message - always happy to help fellow developers figure this stuff out.