How to Create a Reverse Proxy in Nginx

Nginx reverse proxy functionality lets you route client requests to backend servers while presenting a single access point to users. Common use cases include exposing Node.js or Python applications on standard ports (80/443), adding SSL termination for applications without native HTTPS support, load balancing traffic across multiple application servers, and consolidating multiple services under one domain.

This guide demonstrates how to configure nginx as a reverse proxy, covering basic proxying with proper headers, load balancing methods with health checks, SSL termination, WebSocket support, and troubleshooting common issues. By the end, you will have a production-ready reverse proxy configuration with verified examples you can adapt to your infrastructure.

Prerequisites

Before configuring a reverse proxy, verify that nginx is installed and running on your system:

nginx -v

You should see output showing the nginx version:

nginx version: nginx/1.24.0

If nginx is not installed, follow the installation guide for your distribution before continuing.

Additionally, you need a backend service running on a different port that the reverse proxy will forward requests to. Common examples include Node.js applications on port 3000, Python/Django on port 8000, or Flask on port 5000. Before configuring the proxy, verify your backend responds locally:

curl http://localhost:3000

If this command returns your application’s response, then your backend is ready for proxying.
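
If you do not have a backend running yet, Python’s built-in HTTP server makes a convenient stand-in for testing the proxy configuration (assuming Python 3 is installed; it serves static files from the current directory rather than a real application):

python3 -m http.server 3000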

Create and Configure a Reverse Proxy in Nginx

Configure Basic Reverse Proxy

Configuration file locations differ by distribution. For example, Debian and Ubuntu use /etc/nginx/sites-available/ with symlinks to sites-enabled/, while Fedora, RHEL, Rocky Linux, and AlmaLinux load .conf files directly from /etc/nginx/conf.d/. Either approach keeps configurations organized and separate from the main nginx.conf file.

Before creating a new configuration, back up any existing files you plan to modify:

sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup

Next, create a new configuration file for the reverse proxy.

Debian/Ubuntu:

sudo nano /etc/nginx/sites-available/reverse-proxy

Fedora/RHEL/Rocky Linux/AlmaLinux:

sudo nano /etc/nginx/conf.d/reverse-proxy.conf

Next, add the following basic reverse proxy configuration. This example proxies requests to a Node.js application running on localhost port 3000:

server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;

        # Essential proxy headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Replace example.com with your actual domain and adjust the proxy_pass address to match your backend service. Common backend addresses include:

  • http://localhost:3000: Node.js applications (Express, Next.js)
  • http://localhost:8000: Python applications (Django, FastAPI)
  • http://localhost:5000: Flask applications
  • http://192.168.1.10:8080: Apache or other server on a different machine
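
Nginx can also proxy to a backend listening on a Unix domain socket. The syntax, assuming your application exposes one (the socket path here is illustrative):

proxy_pass http://unix:/run/app.sock;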

Understanding Proxy Headers

Each proxy header serves a distinct purpose. Without these headers, backend applications cannot accurately determine client information:

  • proxy_http_version 1.1;: Uses HTTP/1.1 for the upstream connection, enabling keepalive connections and features like chunked transfer encoding. Required for WebSocket proxying.
  • proxy_set_header Host $host;: Passes the original hostname from the client request to the backend. Without this, backend servers receive nginx’s internal hostname, which breaks virtual host routing and session cookies tied to the domain.
  • proxy_set_header X-Real-IP $remote_addr;: Sends the client’s actual IP address to the backend. Applications use this for access logging, geolocation, and IP-based rate limiting.
  • proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;: Appends the client IP to any existing X-Forwarded-For header, maintaining a chain of all proxies the request passed through. Essential when multiple proxies exist in the request path.
  • proxy_set_header X-Forwarded-Proto $scheme;: Indicates whether the original client connection used HTTP or HTTPS. Applications need this to generate correct redirect URLs and enforce security policies.

Backend applications must be configured to trust these headers only from known proxy IP addresses. Without validation, attackers can forge headers to bypass IP-based security controls.

On Debian/Ubuntu systems, enable the site by creating a symbolic link from sites-available to sites-enabled:

sudo ln -s /etc/nginx/sites-available/reverse-proxy /etc/nginx/sites-enabled/

In contrast, Fedora and RHEL-based systems automatically load all .conf files from /etc/nginx/conf.d/, so no additional linking is required.

Test and Apply Configuration

Before applying changes, test the configuration syntax to catch errors:

sudo nginx -t

If the syntax is correct, you will see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

However, if errors appear, nginx reports the exact file and line number. Common mistakes include missing semicolons, unclosed braces, and typos in directive names.

Once the test passes, reload nginx to apply changes without interrupting active connections:

sudo systemctl reload nginx

The reload command returns immediately with no output on success. Unlike restart, which terminates all connections, reload gracefully applies the new configuration while existing connections complete normally.
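
If a reload fails (for example, because a certificate path in the new configuration is wrong), nginx keeps serving with the previous configuration; check its state with:

sudo systemctl status nginx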

Verify Reverse Proxy Operation

Test the reverse proxy by sending a request through nginx to your backend. Use the Host header to simulate a request for your domain:

curl -H "Host: example.com" http://localhost

You should receive the same response as a direct backend request. For example, if your Node.js application returns JSON:

{"status":"ok","message":"Application running"}

To verify headers are passed correctly, check your backend application logs. For requests from external clients, the backend should log the client’s address from X-Real-IP or the proxy chain from X-Forwarded-For, rather than only the nginx server’s own address.

For external testing from another machine or after DNS is configured:

curl -I http://example.com

A successful response shows HTTP headers from your backend application, confirming the proxy is routing requests correctly.

Implement Load Balancing

When running multiple backend servers, nginx can distribute incoming traffic among them using an upstream block. The upstream directive groups backend servers and applies a load balancing algorithm to determine which server receives each request.

Round-Robin Load Balancing

Round-robin is nginx’s default method, distributing requests evenly across all servers in sequence. It works well when backend servers have similar capacity and request processing time is roughly equal.

Create or modify your configuration to include an upstream block with health check parameters:

upstream backend_servers {
    server 192.168.1.10:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:3000 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_servers;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Enable keepalive connections to upstreams
        proxy_set_header Connection "";
    }
}

The max_fails=3 and fail_timeout=30s parameters configure passive health checking: after 3 failed requests within 30 seconds, nginx marks the server as unavailable and stops sending requests to it for the next 30 seconds. This prevents routing traffic to a crashed or unresponsive backend.

Test and reload the configuration:

sudo nginx -t && sudo systemctl reload nginx
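
To observe the rotation, send several requests in a row and compare the responses. This sketch assumes each backend returns something that identifies it, such as its hostname:

for i in $(seq 1 6); do curl -s -H "Host: example.com" http://localhost/; echo; done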

Least Connections Method

The least connections method routes new requests to whichever server has the fewest active connections. This approach works better than round-robin when requests take varying amounts of time to complete, preventing any single server from being overwhelmed by long-running requests while others sit idle.

upstream backend_servers {
    least_conn;

    server 192.168.1.10:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:3000 max_fails=3 fail_timeout=30s;
}

IP Hash for Session Persistence

For applications that store session data locally (rather than in shared storage), IP hash ensures requests from the same client consistently reach the same backend server. Nginx creates a hash from the client’s IP address and uses it to select a server.

upstream backend_servers {
    ip_hash;

    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
    server 192.168.1.12:3000;
}

IP hash becomes unreliable when many users share the same IP address, such as clients behind corporate NAT or mobile carrier networks. For more robust session persistence, consider using cookie-based sticky sessions or storing sessions in shared storage like Redis.
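
One open-source approximation of cookie-based stickiness is the hash directive keyed on a session cookie. A minimal sketch, assuming your application sets a cookie named session_id (adjust the name to match your framework):

upstream backend_servers {
    # Consistent hashing on the session cookie; note that requests
    # without the cookie all hash to the same key, and thus the same server
    hash $cookie_session_id consistent;

    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
    server 192.168.1.12:3000;
}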

Weighted Load Balancing

When backend servers have different capacities, weight parameters direct proportionally more traffic to higher-capacity servers. By default, all servers have weight 1. For example, a server with weight 3 receives three times as many requests as a server with weight 1.

upstream backend_servers {
    server 192.168.1.10:3000 weight=3 max_fails=3 fail_timeout=30s;  # More powerful server
    server 192.168.1.11:3000 weight=2 max_fails=3 fail_timeout=30s;  # Medium server
    server 192.168.1.12:3000 weight=1 max_fails=3 fail_timeout=30s;  # Smaller server

    # Backup server - only receives requests when all primary servers fail
    server 192.168.1.13:3000 backup;
}

The backup parameter designates a server that only receives requests when all non-backup servers are unavailable, providing a failover mechanism for high availability.

Configure SSL Termination

SSL termination allows nginx to handle HTTPS connections and forward decrypted traffic to backend servers over plain HTTP. This approach centralizes certificate management, offloads encryption overhead from application servers, and simplifies backend configuration since applications do not need TLS support.

Obtain SSL Certificates with Certbot

For production deployments, obtain free SSL certificates from Let’s Encrypt using Certbot. First, install both Certbot and the nginx plugin for your distribution.

Debian/Ubuntu:

sudo apt install certbot python3-certbot-nginx

Fedora/RHEL/Rocky Linux/AlmaLinux:

sudo dnf install certbot python3-certbot-nginx

Once installed, request certificates for your domain. The --nginx flag allows Certbot to automatically verify domain ownership through your running nginx configuration:

sudo certbot certonly --nginx -d example.com -d www.example.com

After successful generation, Certbot stores certificates in /etc/letsencrypt/live/example.com/:

  • fullchain.pem: Complete certificate chain including your certificate and intermediate certificates
  • privkey.pem: Private key for the certificate

For detailed Let’s Encrypt setup including automated renewal, see our secure Nginx with Let’s Encrypt guide.
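
You can also confirm that automated renewal will work with a dry run:

sudo certbot renew --dry-run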

Complete SSL Reverse Proxy Configuration

Update your reverse proxy configuration to include SSL settings with modern security parameters:

upstream backend_servers {
    server 192.168.1.10:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:3000 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com www.example.com;

    # Enable HTTP/2 (nginx 1.25.1+)
    http2 on;

    # SSL Certificate Configuration
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # SSL Security Settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers off;

    # SSL Session Settings
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # HSTS (HTTP Strict Transport Security)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    location / {
        proxy_pass http://backend_servers;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Connection "";
    }
}

The http2 on; directive is the current syntax for nginx 1.25.1 and later. For older nginx versions (1.9.5 to 1.25.0), use listen 443 ssl http2; instead. Check your nginx version with nginx -v.

Overall, this configuration restricts TLS to versions 1.2 and 1.3, uses strong cipher suites, enables HTTP/2 for improved performance, and implements HSTS to instruct browsers to always use HTTPS.

Redirect HTTP to HTTPS

Add a separate server block to redirect all HTTP requests to HTTPS:

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # Redirect all HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}

The 301 redirect tells browsers and search engines that this is a permanent redirect. The $host and $request_uri variables preserve the requested hostname and path; $host is preferable to $server_name here because $server_name always expands to the first name in the server_name directive, which would redirect www.example.com requests to example.com.

Test and reload nginx:

sudo nginx -t && sudo systemctl reload nginx

Verify SSL termination by accessing your site with HTTPS:

curl -I https://example.com

The response should include the HSTS header and show your backend’s content. Your backend servers receive plain HTTP traffic while clients connect securely over HTTPS.

Configure WebSocket Proxy

WebSocket applications (real-time chat, live dashboards, collaborative editors) require specific proxy configuration because WebSocket uses an HTTP upgrade mechanism to establish persistent bidirectional connections.

upstream websocket_backend {
    server 192.168.1.10:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://websocket_backend;
        proxy_http_version 1.1;

        # Standard proxy headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket-specific headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Disable buffering for real-time data
        proxy_buffering off;

        # Extended timeout for persistent connections (24 hours)
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}

The Upgrade and Connection headers enable the HTTP upgrade mechanism that WebSocket requires. The extended timeouts (86400 seconds = 24 hours) prevent nginx from closing idle WebSocket connections, which would otherwise disconnect users during periods of inactivity.
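
If the same server block handles both ordinary HTTP requests and WebSocket upgrades, the pattern recommended in the nginx documentation uses a map block (placed in the http context, outside any server block) to set the Connection header conditionally:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

With this in place, use proxy_set_header Connection $connection_upgrade; instead of the hardcoded "upgrade" value, so non-WebSocket requests keep normal keepalive behavior.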

Security Best Practices

Reverse proxies introduce security considerations beyond basic web server configuration. The following practices help maintain a secure setup:

Restrict Backend Server Access

Configure firewall rules so backend servers only accept connections from your nginx server, not directly from the internet. This prevents attackers from bypassing the reverse proxy.

On UFW (Debian/Ubuntu):

sudo ufw allow from 192.168.1.5 to any port 3000 comment 'Allow nginx to backend'

On firewalld (Fedora/RHEL):

sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.5" port port="3000" protocol="tcp" accept'
sudo firewall-cmd --reload

Replace 192.168.1.5 with your nginx server’s IP address.

Validate Proxy Headers in Backend Applications

Backend applications must validate that X-Forwarded-* headers originate from trusted proxy servers. Without validation, attackers can forge these headers to bypass IP-based access controls or hide their real IP address in logs.

Configure your application framework to trust headers only from your nginx server’s IP. For example, in Express.js:

app.set('trust proxy', '192.168.1.5')
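
With trust proxy set, Express derives req.ip from the X-Forwarded-For header rather than reporting the proxy’s own address, so logging and rate limiting see the real client.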

Prevent Open Proxy Misconfiguration

Never use user-controlled variables in proxy_pass without strict validation. This dangerous configuration creates an open proxy:

# DANGEROUS - Do not use
location / {
    proxy_pass http://$arg_target;  # Attacker-controlled
}

For example, attackers could exploit this to proxy requests to arbitrary destinations, potentially using your server for attacks on other systems or accessing internal resources.
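
If you genuinely need request-driven routing, constrain it to an explicit allowlist instead. A sketch using a map block, with placeholder service names and addresses:

# Map the query parameter to a fixed set of internal destinations
map $arg_target $allowed_backend {
    default    "";
    app        http://192.168.1.10:3000;
    api        http://192.168.1.11:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # Reject anything not on the allowlist
        if ($allowed_backend = "") {
            return 404;
        }
        proxy_pass $allowed_backend;
    }
}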

Implement Rate Limiting

Protect backend servers from abuse by implementing rate limiting at the nginx level:

# Define rate limit zone in http context
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 80;
    server_name example.com;

    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        
        proxy_pass http://backend_servers;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

This configuration limits each client IP to 10 requests per second with a burst allowance of 20 requests. Additionally, the nodelay parameter serves burst requests immediately rather than queuing them. For comprehensive rate limiting strategies, see our nginx rate limiting guide.
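
You can confirm the limit is enforced by sending a quick burst of requests; once the burst allowance is exhausted, nginx answers with 503 by default (set limit_req_status 429 if you prefer that code):

for i in $(seq 1 40); do curl -s -o /dev/null -w "%{http_code}\n" http://localhost/api/; done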

Troubleshooting Common Issues

Reverse proxy configurations encounter a handful of characteristic problems. Here are the most common ones, with diagnostic steps and solutions:

502 Bad Gateway Error

This error indicates nginx cannot establish a connection to the backend server. To diagnose it, test backend connectivity directly from the nginx server:

curl -v http://localhost:3000

Common causes and solutions:

  • Backend service not running: Verify the service is active with systemctl status service-name or ps aux | grep process-name
  • Wrong backend address or port: Confirm the proxy_pass URL matches where your application actually listens. Check with ss -tlnp | grep 3000
  • Firewall blocking connections: Ensure firewall rules allow nginx to reach the backend port
  • SELinux blocking connections (RHEL-based): Check denials with audit2why < /var/log/audit/audit.log or temporarily set SELinux to permissive with sudo setenforce 0 to test. The permanent fix is sudo setsebool -P httpd_can_network_connect 1, which allows nginx to make outbound network connections

Check nginx error logs for specific connection failure messages:

sudo tail -50 /var/log/nginx/error.log
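
A refused connection typically produces an entry along these lines (addresses and process IDs are illustrative):

2024/01/15 10:23:45 [error] 1234#1234: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 203.0.113.7, server: example.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "example.com"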

504 Gateway Timeout Error

Backend servers taking too long to respond trigger timeout errors. For long-running operations like file uploads, report generation, or API calls to slow external services, increase timeout values:

location / {
    proxy_pass http://backend_servers;
    proxy_http_version 1.1;

    # Increase timeouts for slow operations
    proxy_connect_timeout 60s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

Increasing timeouts is a stopgap, not a fix. Investigate root causes such as slow database queries, unoptimized API calls, or insufficient server resources.

Backend Receives Wrong Hostname

If your backend application generates incorrect URLs, shows the wrong domain in links, or breaks session cookies, then the Host header is likely missing or incorrect. Verify the header is being passed by checking backend logs or adding temporary debugging.

Ensure your configuration includes:

proxy_set_header Host $host;

After that, test with:

curl -H "Host: example.com" http://localhost

Check backend logs to confirm they receive example.com as the hostname, not localhost or an IP address.

SSL Certificate Errors

Common SSL issues when configuring termination:

  • Certificate not found: Verify paths in your configuration match actual file locations. Check with ls -la /etc/letsencrypt/live/example.com/
  • Permission denied: The nginx worker process needs read access to certificate files. Certificates in /etc/letsencrypt/ are normally accessible, but verify by checking the user nginx runs as (www-data on Debian/Ubuntu, nginx on RHEL-based systems) can read the files
  • Certificate chain incomplete: Use fullchain.pem (not cert.pem) to include intermediate certificates
  • Mixed content warnings: Ensure backend generates HTTPS URLs by checking it receives the correct X-Forwarded-Proto header

Test SSL configuration with:

openssl s_client -connect example.com:443 -servername example.com
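
In the output, look for a Verify return code: 0 (ok) line, which confirms the certificate chain validates end to end.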

Remove Reverse Proxy Configuration

To remove a reverse proxy configuration and restore direct nginx operation:

Debian/Ubuntu:

# Disable the site by removing the symlink
sudo rm /etc/nginx/sites-enabled/reverse-proxy

# Optionally remove the configuration file entirely
sudo rm /etc/nginx/sites-available/reverse-proxy

# Test and reload
sudo nginx -t && sudo systemctl reload nginx

Fedora/RHEL/Rocky Linux/AlmaLinux:

# Remove the configuration file
sudo rm /etc/nginx/conf.d/reverse-proxy.conf

# Test and reload
sudo nginx -t && sudo systemctl reload nginx

After removal, requests to your server receive nginx’s default response or are handled by other configured sites. Additionally, if you configured SSL termination, the certificates remain in /etc/letsencrypt/ and can be reused or removed with sudo certbot delete --cert-name example.com.

Conclusion

You now have a production-ready nginx reverse proxy with proper header configuration, load balancing with health checks, SSL termination, WebSocket support, and security measures. The setup routes client requests efficiently while isolating backend servers from direct internet access. For further optimization, explore our guides on security headers in nginx, gzip compression, rate limiting, and open file cache to enhance performance and security.
